Learning to Trust the Crowd: A Multi-Model Consensus Reasoning Engine for Large Language Models
By: Pranav Kallem
Potential Business Impact:
Makes AI answers more truthful and correct.
Large language models (LLMs) achieve strong average performance yet remain unreliable at the instance level, with frequent hallucinations, brittle failures, and poorly calibrated confidence. We study reliability through the lens of multi-model consensus: given responses from several heterogeneous LLMs, can we learn which answer is most likely correct for a given query? We introduce a Multi-Model Consensus Reasoning Engine that treats the set of LLM outputs as input to a supervised meta-learner. The system maps natural language responses into structured features using semantic embeddings, pairwise similarity and clustering statistics, lexical and structural cues, reasoning-quality scores, confidence estimates, and model-specific priors, and then applies gradient-boosted trees, listwise ranking, and graph neural networks over similarity graphs of answers. Using three open-weight LLMs evaluated on compact, resource-constrained subsets of GSM8K, ARC-Challenge, HellaSwag, and TruthfulQA, our best graph-attention-based consensus model improves macro-average accuracy by 4.6 percentage points over the strongest single LLM and by 8.1 points over majority vote, while also yielding lower Brier scores and fewer TruthfulQA hallucinations. Ablation and feature-importance analyses show that semantic agreement and clustering features are most influential, with reasoning-quality and model-prior features providing complementary gains, suggesting that supervised multi-model consensus is a practical route toward more reliable LLM behavior, even in a modest single-machine setup.
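To make the consensus step concrete, the sketch below shows one way such a supervised meta-learner could be wired up; it is not the authors' released code, and several pieces are stand-in assumptions: TF-IDF cosine similarity substitutes for the semantic embeddings described in the abstract, a gradient-boosted classifier plays the role of the meta-learner (the paper also uses listwise ranking and graph neural networks), and the consensus_features helper, toy data, and per-model priors are all hypothetical.

# Minimal sketch (assumptions noted above) of supervised multi-model consensus:
# turn candidate answers from several LLMs into agreement features, then train
# a gradient-boosted classifier to score which answer is most likely correct.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.ensemble import GradientBoostingClassifier

def consensus_features(answers, model_priors):
    """Per-answer features: mean/max similarity to the other answers (semantic
    agreement), answer length (crude structural cue), and a model-specific prior."""
    sims = cosine_similarity(TfidfVectorizer().fit_transform(answers))
    np.fill_diagonal(sims, 0.0)
    n = len(answers)
    return np.asarray([
        [
            sims[i].sum() / (n - 1),   # mean agreement with the other models
            sims[i].max(),             # strongest pairwise agreement
            len(answers[i].split()),   # structural cue: answer length
            model_priors[i],           # prior accuracy of the producing model
        ]
        for i in range(n)
    ])

# Toy training data: one answer per model per query, with 0/1 correctness labels.
queries = [
    (["The answer is 12.", "It is 12.", "The result is 15."], [1, 1, 0]),
    (["Paris is the capital.", "The capital is Paris.", "It is Lyon."], [1, 1, 0]),
    (["2 + 2 = 5.", "The sum is 4.", "It equals 4."], [0, 1, 1]),
]
priors = [0.71, 0.68, 0.64]  # hypothetical per-model accuracies on a dev split

X = np.vstack([consensus_features(ans, priors) for ans, _ in queries])
y = np.concatenate([labels for _, labels in queries])
meta_learner = GradientBoostingClassifier(n_estimators=50).fit(X, y)

# At inference time, score every candidate answer and keep the top-scoring one.
test_answers = ["The total is 42.", "I get 42.", "It should be 40."]
scores = meta_learner.predict_proba(consensus_features(test_answers, priors))[:, 1]
print("selected answer:", test_answers[int(np.argmax(scores))])

In this toy form, the meta-learner mostly learns that mutually agreeing answers from higher-prior models are more trustworthy, which mirrors the abstract's finding that semantic-agreement and model-prior features carry most of the signal.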
Similar Papers
Consensus Is All You Need: Gossip-Based Reasoning Among Large Language Models
Multiagent Systems
AI models work together to give better answers.
CURE: Confidence-driven Unified Reasoning Ensemble Framework for Medical Question Answering
Computation and Language
Helps doctors answer questions without expensive computers.
A Hashgraph-Inspired Consensus Mechanism for Reliable Multi-Model Reasoning
Artificial Intelligence
Makes AI answers more truthful by checking many AIs.