Beyond Majority Voting: LLM Aggregation by Leveraging Higher-Order Information

Published: October 1, 2025 | arXiv ID: 2510.01499v1

By: Rui Ai, Yuqi Pan, David Simchi-Levi, and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Better answers by combining the outputs of many AI models.

Business Areas:
Crowdsourcing, Collaboration

With the rapid progress of multi-agent large language model (LLM) reasoning, how to effectively aggregate answers from multiple LLMs has emerged as a fundamental challenge. Standard majority voting treats all answers equally, failing to account for latent heterogeneity and correlation across models. In this work, we design two new aggregation algorithms, Optimal Weight (OW) and Inverse Surprising Popularity (ISP), which leverage both first-order and second-order information. Our theoretical analysis shows that these methods provably mitigate inherent limitations of majority voting under mild assumptions, leading to more reliable collective decisions. We empirically validate our algorithms on synthetic datasets, popular LLM fine-tuning benchmarks such as UltraFeedback and MMLU, and a real-world healthcare setting, ARMMAN. Across all cases, our methods consistently outperform majority voting, offering both practical performance gains and conceptual insights for the design of robust multi-agent LLM pipelines.
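The abstract does not spell out the OW and ISP update rules, but the two ingredients it names are well known: first-order information (which answer each model gives) and second-order information (what each model predicts its peers will answer). The sketch below is a minimal, hypothetical Python illustration under those assumptions: `optimal_weight_vote` stands in for OW as a weighted majority vote with externally supplied per-model weights, and `surprisingly_popular_vote` stands in for ISP in the spirit of the classic surprisingly-popular rule, selecting the answer whose actual frequency most exceeds its predicted frequency. Function names, weights, and toy data here are illustrative, not the paper's definitions.

```python
# Hypothetical sketch of the two aggregation ideas named in the abstract.
# Assumptions (not from the paper): OW ~ weighted majority vote with given
# per-model weights; ISP ~ a surprisingly-popular-style rule that compares
# actual answer frequencies against predicted peer frequencies.

from collections import Counter

def optimal_weight_vote(answers, weights):
    """Weighted majority vote: each model's answer counts with its weight."""
    scores = Counter()
    for ans, w in zip(answers, weights):
        scores[ans] += w
    return scores.most_common(1)[0][0]

def surprisingly_popular_vote(answers, predicted_popularity):
    """Pick the answer whose actual frequency most exceeds its predicted
    frequency; predicted_popularity[a] is the models' average forecast of
    how many peers will answer a (the second-order signal)."""
    n = len(answers)
    actual = {a: c / n for a, c in Counter(answers).items()}
    return max(actual, key=lambda a: actual[a] - predicted_popularity.get(a, 0.0))

# Toy usage: three models answer a binary question.
answers = ["A", "B", "B"]
weights = [0.9, 0.4, 0.4]          # e.g. held-out accuracy per model (assumed)
predicted = {"A": 0.2, "B": 0.7}   # models' average forecast of peer answers

print(optimal_weight_vote(answers, weights))          # "A": high-weight model wins
print(surprisingly_popular_vote(answers, predicted))  # "A": beats its predicted share
```

In the toy run, the weighted vote lets the single high-weight model override the raw two-to-one majority, and the surprisingly-popular rule also picks "A" because its observed share (1/3) exceeds its predicted share (0.2), while "B" falls short of its prediction. Both outcomes show how first- and second-order information can overturn plain majority voting.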

Country of Origin
🇺🇸 United States

Page Count
35 pages

Category
Computer Science: Machine Learning (CS)