Mixture-of-Minds: Multi-Agent Reinforcement Learning for Table Understanding

Published: October 23, 2025 | arXiv ID: 2510.20176v1

By: Yuhang Zhou, Mingrui Zhang, Ke Li, and more

BigTech Affiliations: Meta

Potential Business Impact:

Helps computers understand and answer questions from tables.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Understanding and reasoning over tables is a critical capability for many real-world applications. Large language models (LLMs) have shown promise on this task, but current approaches remain limited. Fine-tuning-based methods strengthen language reasoning, yet they are prone to arithmetic errors and hallucination. In contrast, tool-based methods enable precise table manipulation but rely on rigid schemas and lack semantic understanding. These complementary drawbacks highlight the need for approaches that integrate robust reasoning with reliable table processing. In this work, we propose Mixture-of-Minds, a multi-agent framework that decomposes table reasoning into three specialized roles: planning, coding, and answering. This design enables each agent to focus on a specific aspect of the task while leveraging code execution for precise table manipulation. Building on this workflow, we introduce a self-improvement training framework that employs Monte Carlo Tree Search (MCTS) rollouts to generate pseudo-gold trajectories and optimize the agents with reinforcement learning (RL). Extensive experiments show that Mixture-of-Minds delivers substantial gains, reaching 62.13% on TableBench and surpassing OpenAI-o4-mini-high. These results demonstrate the promise of combining structured multi-agent workflows with RL to advance table understanding.
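The planner-coder-answerer decomposition described in the abstract can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the paper's implementation: the agent functions, the toy routing logic, and the `Table` representation below are all assumptions standing in for LLM-driven agents.

```python
# Illustrative sketch of a three-role table-reasoning pipeline
# (planner -> coder -> answerer). In the actual framework each role
# is an LLM agent; here each is a hand-written stub.

Table = list[dict]  # a table as a list of rows, each row a column->value mapping


def planner(question: str) -> str:
    """Planning agent (stub): map the question to a named table operation."""
    if "average" in question:
        return "mean"
    if "total" in question:
        return "sum"
    return "count"


def coder(plan: str, table: Table, column: str) -> float:
    """Coding agent (stub): execute the plan as code, giving exact arithmetic
    instead of asking a language model to compute numbers in text."""
    values = [row[column] for row in table]
    if plan == "mean":
        return sum(values) / len(values)
    if plan == "sum":
        return float(sum(values))
    return float(len(values))


def answerer(question: str, result: float) -> str:
    """Answering agent (stub): phrase the executed result as a final answer."""
    return f"{question} -> {result}"


table = [{"sales": 10}, {"sales": 20}, {"sales": 30}]
plan = planner("average sales")
result = coder(plan, table, "sales")
print(answerer("average sales", result))  # average sales -> 20.0
```

Routing the arithmetic through executed code is what lets the workflow avoid the numeric hallucinations the abstract attributes to purely fine-tuning-based methods, while the planner and answerer retain the semantic flexibility that rigid tool schemas lack.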

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Computation and Language