Beyond Majority Voting: LLM Aggregation by Leveraging Higher-Order Information
By: Rui Ai, Yuqi Pan, David Simchi-Levi, and more
Potential Business Impact:
Better answers from many AI brains.
With the rapid progress of multi-agent large language model (LLM) reasoning, how to effectively aggregate answers from multiple LLMs has emerged as a fundamental challenge. Standard majority voting treats all answers equally, failing to account for latent heterogeneity and correlation across models. In this work, we design two new aggregation algorithms, Optimal Weight (OW) and Inverse Surprising Popularity (ISP), which leverage both first-order and second-order information. Our theoretical analysis shows that these methods provably mitigate inherent limitations of majority voting under mild assumptions, leading to more reliable collective decisions. We empirically validate our algorithms on synthetic datasets, popular LLM fine-tuning benchmarks such as UltraFeedback and MMLU, and a real-world healthcare setting, ARMMAN. Across all cases, our methods consistently outperform majority voting, offering both practical performance gains and conceptual insights for the design of robust multi-agent LLM pipelines.
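The abstract does not spell out the OW and ISP algorithms, but the two ideas they build on can be illustrated. Below is a hedged sketch, not the authors' exact methods: `weighted_vote` shows the first-order idea behind Optimal Weight (weighting models by an assumed reliability score instead of counting them equally), and `surprising_popularity` shows the second-order idea behind ISP, in the style of the classic "surprisingly popular" rule (pick the answer whose actual vote share most exceeds the share the models themselves predicted). The function names, the reliability weights, and the `predictions` format are all illustrative assumptions.

```python
from collections import Counter


def majority_vote(answers):
    """Baseline: pick the most frequent answer, treating all models equally."""
    return Counter(answers).most_common(1)[0][0]


def weighted_vote(answers, weights):
    """First-order sketch (in the spirit of OW, not the paper's algorithm):
    each model's answer counts with an assumed reliability weight."""
    scores = {}
    for answer, weight in zip(answers, weights):
        scores[answer] = scores.get(answer, 0.0) + weight
    return max(scores, key=scores.get)


def surprising_popularity(answers, predictions):
    """Second-order sketch (in the spirit of ISP, not the paper's algorithm):
    pick the answer whose actual vote share most exceeds the average
    *predicted* vote share. predictions[i] maps each candidate answer to
    model i's predicted fraction of models choosing it (an assumed format)."""
    candidates = set(answers)
    n = len(answers)
    actual = {a: answers.count(a) / n for a in candidates}
    predicted = {
        a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
        for a in candidates
    }
    return max(candidates, key=lambda a: actual[a] - predicted[a])


# Three models answer; B wins the raw vote, but all models predicted B
# would be even more popular than it actually is, so A is "surprisingly
# popular" and gets selected despite being the minority answer.
answers = ["A", "B", "B"]
predictions = [{"A": 0.3, "B": 0.7}] * 3
print(majority_vote(answers))                      # → B
print(surprising_popularity(answers, predictions))  # → A
```

The toy example shows why second-order information matters: when models can anticipate the crowd's error, the minority answer that beats its own predicted popularity can recover the truth that plain majority voting misses.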
Similar Papers
The Majority is not always right: RL training for solution aggregation
Computation and Language
Makes AI smarter by teaching it to pick the best answer.
Mixture of Thoughts: Learning to Aggregate What Experts Think, Not Just What They Say
Machine Learning (CS)
Combines AI brains to solve harder problems.
Quantifying and Mitigating Selection Bias in LLMs: A Transferable LoRA Fine-Tuning and Efficient Majority Voting Approach
Computation and Language
Makes AI answer questions more fairly.