{"props":{"pageProps":{"router":{"route":"/item","pathname":"/item","query":{"id":"43331673"},"asPath":"/item?id=43331673","isFallback":false,"basePath":"","isReady":false,"isPreview":false,"isLocaleDomain":false}}},"page":"/item","query":{"id":"43331673"},"buildId":"jm9oXtgCi-NbCB0xJEPYA","isFallback":false,"rsc":true,"customServer":true,"scriptLoader":[]}class="main">
> Our proposed framework is a novel role-playing approach for studying multiple communicative agents. Specifically, we concentrate on task-oriented role-playing that involves one AI assistant and one AI user. After the multi-agent system receives a preliminary idea and the role assignment from human users, a task-specifier agent will provide a detailed description to make the idea specific. Afterwards, the AI assistant and AI user will cooperate on completing the specified task through multi-turn conversations until the AI user determines the task is done. The AI user is responsible for giving instructions to the AI assistant and directing the conversation toward task completion. On the other hand, the AI assistant is designed to follow the instructions from the AI user and respond with specific solutions. (below Figure 1)
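For concreteness, here is a minimal sketch of the loop that passage (Figure 1 of the paper) describes, not CAMEL's actual implementation. The llm helper, the prompts, and the <TASK_DONE> marker are all my own stand-ins; any chat-completion API would do.

    from openai import OpenAI

    client = OpenAI()

    def llm(system: str, user: str) -> str:
        """One chat completion (hypothetical helper; any LLM API works here)."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return resp.choices[0].message.content

    def render(transcript: list[tuple[str, str]]) -> str:
        return "\n".join(f"{who}: {text}" for who, text in transcript)

    def role_play(idea: str, user_role: str, assistant_role: str, max_turns: int = 20):
        # Task specifier: make the preliminary idea concrete before the loop starts.
        task = llm("Rewrite this preliminary idea as one specific, concrete task.",
                   f"Idea: {idea}")
        user_sys = (f"You are the {user_role}. Give the {assistant_role} one "
                    f"instruction per turn toward completing this task: {task}. "
                    f"Reply <TASK_DONE> when the task is complete.")
        asst_sys = (f"You are the {assistant_role}. Carry out each instruction "
                    f"with a concrete solution for this task: {task}.")
        transcript: list[tuple[str, str]] = []
        for _ in range(max_turns):
            # AI user directs the conversation; AI assistant solves each instruction.
            instruction = llm(user_sys, render(transcript) or "Start.")
            if "<TASK_DONE>" in instruction:  # the AI user decides when it's done
                break
            transcript.append(("User", instruction))
            transcript.append(("Assistant", llm(asst_sys, render(transcript))))
        return task, transcript

Note there is no oracle anywhere in that loop: "done" is whatever the AI user model says it is, which is exactly the weakness the comments below poke at.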
This appears to be another perpetual information machine. You really need to have a human in the loop at some level. I agree that you can go very far with a good initial prompt, but once you hit the first ambiguity you need an external signal to correct course. This stuff goes sideways quickly with bad assumptions.
The best I've been able to achieve with "multi-agent" is to recursively invoke prompts and pass a summary of the prior context + request each time. The prompts are effectively agents, each with a goal to narrow and re-focus context as the task progresses through the tool call stack. I have never seen multiple agents talking to each other autonomously evolve into anything the business would care about.