MAR: Multi-Agent Reflexion Improves Reasoning Abilities in LLMs
By: Onat Ozer, Grace Wu, Yuchen Wang, and more
LLMs have shown the capacity to improve their performance on reasoning tasks by reflecting on their mistakes and acting with these reflections in mind. However, continual reflection of the same LLM on its own outputs exhibits degeneration of thought, where the LLM repeats the same errors again and again even with the knowledge that it is wrong. To address this problem, we instead introduce multi-agent debate among multiple personas as the method for generating reflections. Through extensive experimentation, we find that this leads to greater diversity in the reflections generated by the LLM agent. We demonstrate an accuracy of 47% EM on HotpotQA (question answering) and 82.7% on HumanEval (programming), both surpassing reflection with a single LLM.
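The abstract does not give implementation details, but the core loop it describes can be sketched. Below is a minimal sketch, assuming a generic text-in/text-out `llm` callable; the function name `multi_persona_reflection`, the example personas, the prompts, and the round count are illustrative assumptions, not the paper's actual design.

```python
from typing import Callable, Sequence

def multi_persona_reflection(
    task: str,
    failed_attempt: str,
    llm: Callable[[str], str],  # any text-in/text-out model call (assumption)
    personas: Sequence[str] = ("skeptical reviewer", "domain expert", "careful beginner"),
    rounds: int = 2,
) -> str:
    """Sketch: generate a reflection on a failed attempt via multi-persona debate."""
    transcript: list[str] = []
    for _ in range(rounds):
        for persona in personas:
            # Each persona critiques the attempt while seeing the debate so far,
            # which is what pushes the reflections toward greater diversity.
            prompt = (
                f"You are a {persona} debating why this attempt failed.\n"
                f"Task: {task}\n"
                f"Failed attempt: {failed_attempt}\n"
                "Debate so far:\n" + "\n".join(transcript) + "\n"
                "Point out a concrete mistake and respond to the other debaters."
            )
            transcript.append(f"[{persona}] {llm(prompt)}")
    # Condense the debate into a single reflection the actor can act on.
    return llm(
        "Summarize the following debate into a concise reflection listing the "
        "concrete mistakes to avoid on the next attempt:\n" + "\n".join(transcript)
    )
```

As in standard Reflexion, the returned reflection would then be added to the actor's context before its next attempt; the difference here is that the reflection comes from a debate among personas rather than from the same model critiquing itself.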
Similar Papers
Meta-Thinking in LLMs via Multi-Agent Reinforcement Learning: A Survey
Artificial Intelligence
Makes AI think about its own thinking better.
Adaptive Reasoning Executor: A Collaborative Agent System for Efficient Reasoning
Artificial Intelligence
Smarter AI answers questions faster, cheaper.
Mitigating Manipulation and Enhancing Persuasion: A Reflective Multi-Agent Approach for Legal Argument Generation
Artificial Intelligence
Makes AI write fair legal arguments, not fake ones.