PanelTR: Zero-Shot Table Reasoning Framework Through Multi-Agent Scientific Discussion
By: Yiran Rex Ma
Potential Business Impact:
Makes computers understand tables without extra training.
Table reasoning, including tabular QA and fact verification, often depends on annotated data or complex data augmentation, limiting flexibility and generalization. LLMs, despite their versatility, often underperform simple supervised models on these tasks. To address these issues, we introduce PanelTR, a framework that employs LLM agent scientists for robust table reasoning through a structured scientific approach. PanelTR's workflow involves agent scientists conducting individual investigations, engaging in self-review, and participating in collaborative peer-review discussions. This process, driven by five scientist personas, enables semantic-level transfer without relying on data augmentation or parametric optimization. Experiments across four benchmarks show that PanelTR outperforms vanilla LLMs and rivals fully supervised models, all while remaining independent of training data. Our findings indicate that structured scientific methodology, with its flexible semantic understanding, can effectively handle complex tasks beyond table reasoning in a zero-shot setting.
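The abstract outlines a three-stage, zero-shot workflow: independent investigation by persona agents, self-review, and a collaborative peer-review discussion. Below is a minimal sketch of that pipeline under stated assumptions; `call_llm`, the persona names, and the prompt wording are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of a PanelTR-style pipeline (assumed structure, not the
# authors' code). `call_llm` stands in for any zero-shot chat-completion API.
from dataclasses import dataclass

# Five scientist personas; names here are illustrative assumptions.
PERSONAS = [
    "statistician", "domain expert", "data engineer",
    "logician", "skeptical reviewer",
]

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; wire in any chat-completion client."""
    raise NotImplementedError

@dataclass
class Finding:
    persona: str
    answer: str

def investigate(persona: str, table: str, question: str) -> Finding:
    # Stage 1: each persona answers independently from the table alone.
    out = call_llm(
        f"You are a {persona}. Table:\n{table}\n"
        f"Question: {question}\nGive an answer with a short rationale."
    )
    return Finding(persona, out.strip())

def self_review(f: Finding, table: str, question: str) -> Finding:
    # Stage 2: the same persona re-checks its own answer against the table.
    out = call_llm(
        f"As the {f.persona}, re-examine your answer to '{question}' "
        f"given the table:\n{table}\nPrevious answer: {f.answer}\n"
        f"Revise it if the table contradicts it."
    )
    return Finding(f.persona, out.strip())

def peer_review(findings: list[Finding], table: str, question: str) -> str:
    # Stage 3: a panel discussion aggregates the reviewed findings
    # into one final answer.
    transcript = "\n".join(f"{f.persona}: {f.answer}" for f in findings)
    return call_llm(
        f"Panel discussion on '{question}' over the table:\n{table}\n"
        f"Individual findings:\n{transcript}\nAgree on one final answer."
    )

def panel_tr(table: str, question: str) -> str:
    findings = [investigate(p, table, question) for p in PERSONAS]
    findings = [self_review(f, table, question) for f in findings]
    return peer_review(findings, table, question)
```

Because every stage is a plain prompt over the serialized table, the sketch needs no training data or parameter updates, which is the property the abstract emphasizes.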
Similar Papers
Table-r1: Self-supervised and Reinforcement Learning for Program-based Table Reasoning in Small Language Models
Machine Learning (CS)
Helps small AI models understand tables like big ones.
Utilizing Training Data to Improve LLM Reasoning for Tabular Understanding
Machine Learning (CS)
Helps computers understand data tables better.
TableMind: An Autonomous Programmatic Agent for Tool-Augmented Table Reasoning
Artificial Intelligence
Helps computers understand and answer questions from tables.