Do Multi-Agents Solve Better Than Single? Evaluating Agentic Frameworks for Diagram-Grounded Geometry Problem Solving and Reasoning
By: Mahbub E Sobhani, Md. Faiyaz Abdullah Sayeedi, Mohammad Nehad Alam, and more
Potential Business Impact:
Improves AI accuracy on diagram-based geometry and math problems.
Diagram-grounded geometry problem solving is a critical benchmark for multimodal large language models (MLLMs), yet the benefits of multi-agent designs over single-agent ones remain unclear. We systematically compare single-agent and multi-agent pipelines on four visual math benchmarks: Geometry3K, MathVerse, OlympiadBench, and We-Math. For open-source models, multi-agent pipelines consistently improve performance. For example, Qwen-2.5-VL (7B) gains +6.8 points and Qwen-2.5-VL (32B) gains +3.3 points on Geometry3K, and both Qwen-2.5-VL variants see further gains on OlympiadBench and We-Math. In contrast, the closed-source Gemini-2.0-Flash generally performs better in single-agent mode on classic benchmarks, while multi-agent pipelines yield only modest improvements on the newer We-Math dataset. These findings show that multi-agent pipelines provide clear benefits for open-source models and can assist strong proprietary systems on newer, less familiar benchmarks, but agentic decomposition is not universally optimal. All code, data, and reasoning files are available at https://github.com/faiyazabdullah/Interpreter-Solver.
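The repository name suggests an interpreter/solver split of the kind the abstract contrasts with single-agent prompting. As a rough illustration only, the two pipeline variants can be sketched as below; the function names, stubbed "model calls," and prompt shapes are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of single-agent vs. two-agent (interpreter -> solver)
# pipelines for diagram-grounded problems. Real MLLM calls are replaced
# with string-returning stubs so the control flow is runnable as-is.

def interpreter_agent(diagram_description: str, question: str) -> str:
    """Stage 1 (stub): turn the diagram + question into explicit
    textual facts. A real system would query an MLLM on the image."""
    return f"facts: {diagram_description} | question: {question}"

def solver_agent(structured_facts: str) -> str:
    """Stage 2 (stub): reason over the interpreter's structured output.
    A real system would prompt a (possibly different) model here."""
    return f"answer from [{structured_facts}]"

def multi_agent_pipeline(diagram_description: str, question: str) -> str:
    """Agentic decomposition: interpret first, then solve."""
    facts = interpreter_agent(diagram_description, question)
    return solver_agent(facts)

def single_agent_pipeline(diagram_description: str, question: str) -> str:
    """Baseline (stub): one call handles both perception and reasoning."""
    return f"answer from [{diagram_description}; {question}]"
```

The point of the decomposition is that the solver reasons over an explicit, verifiable fact list rather than raw pixels, which is one plausible explanation for the gains the paper reports on open-source models.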
Similar Papers
Multi-Agent Collaborative Framework For Math Problem Generation
Multiagent Systems
Creates math problems that are just right.
Query Optimization Beyond Data Systems: The Case for Multi-Agent Systems
Databases
Makes AI teams work smarter, faster, and cheaper.
PublicAgent: Multi-Agent Design Principles From an LLM-Based Open Data Analysis Framework
Artificial Intelligence
Lets anyone ask questions of data without being a computer expert.