Exploring Design of Multi-Agent LLM Dialogues for Research Ideation
By: Keisuke Ueda, Wataru Hirota, Takuto Asakura, and more
Potential Business Impact:
Helps AI brainstorm better science ideas.
Large language models (LLMs) are increasingly used to support creative tasks such as research idea generation. While recent work has shown that structured dialogues between LLMs can improve the novelty and feasibility of generated ideas, the optimal design of such interactions remains unclear. In this study, we conduct a comprehensive analysis of multi-agent LLM dialogues for scientific ideation. We compare different configurations of agent roles, number of agents, and dialogue depth to understand how these factors influence the novelty and feasibility of generated ideas. Our experimental setup includes settings where one agent generates ideas and another critiques them, enabling iterative improvement. Our results show that enlarging the agent cohort, increasing the dialogue depth, and broadening agent persona heterogeneity each enrich the diversity of generated ideas. Moreover, specifically increasing critic-side diversity within the ideation-critique-revision loop further boosts the feasibility of the final proposals. Our findings offer practical guidelines for building effective multi-agent LLM systems for scientific ideation. Our code is available at https://github.com/g6000/MultiAgent-Research-Ideator.
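To make the ideation-critique-revision loop described above concrete, here is a minimal sketch of such a pipeline. The `call_llm` helper, the agent personas, the prompt wording, and the default depth are all illustrative assumptions, not the paper's actual implementation; see the linked repository for the authors' code.

```python
# Minimal sketch of an ideation-critique-revision loop with heterogeneous
# critic personas. All prompts and the call_llm backend are placeholders.

from dataclasses import dataclass


def call_llm(system: str, user: str) -> str:
    # Placeholder for any chat-completion backend (OpenAI, local model, etc.).
    raise NotImplementedError("Plug in your LLM client here.")


@dataclass
class Agent:
    persona: str  # system prompt fixing the agent's role and viewpoint

    def respond(self, prompt: str) -> str:
        return call_llm(system=self.persona, user=prompt)


def ideate(topic: str, critics: list[Agent], depth: int = 3) -> str:
    """Generate an idea, then refine it over `depth` critique rounds.

    Diverse critic personas are the lever the abstract highlights:
    critic-side heterogeneity is what boosted feasibility in the study.
    """
    ideator = Agent(persona="You are a researcher proposing novel study ideas.")
    idea = ideator.respond(f"Propose a research idea about: {topic}")
    for _ in range(depth):
        # One critique per critic; varied personas broaden the feedback.
        critiques = [c.respond(f"Critique this research idea:\n{idea}")
                     for c in critics]
        idea = ideator.respond(
            "Revise the idea below to address the critiques.\n"
            f"Idea:\n{idea}\n\nCritiques:\n" + "\n---\n".join(critiques)
        )
    return idea


# Example usage: three critics with deliberately different viewpoints.
critics = [
    Agent(persona="You are a skeptical methodologist focused on rigor."),
    Agent(persona="You are a practitioner focused on feasibility and cost."),
    Agent(persona="You are a domain expert focused on novelty."),
]
# proposal = ideate("evaluating multi-agent LLM dialogues", critics)
```

Under the paper's findings, scaling this sketch means growing the `critics` list (agent cohort and persona heterogeneity) and raising `depth` (dialogue depth), with critic diversity being the factor most tied to the feasibility of the final proposal.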
Similar Papers
Deep Ideation: Designing LLM Agents to Generate Novel Research Ideas on Scientific Concept Network
Artificial Intelligence
Finds new science ideas better than before.
The Social Laboratory: A Psychometric Framework for Multi-Agent LLM Evaluation
Artificial Intelligence
AI agents learn to agree and persuade each other.
Harnessing Multi-Agent LLMs for Complex Engineering Problem-Solving: A Framework for Senior Design Projects
Multiagent Systems
Helps student teams solve hard projects together.