Multi-Agent Debate for LLM Judges with Adaptive Stability Detection
By: Tianyu Hu, Zhen Tan, Song Wang, and more
Potential Business Impact:
Debating computers make better judgments than voting ones.
With advancements in reasoning capabilities, Large Language Models (LLMs) are increasingly employed for automated judgment tasks. While LLMs-as-Judges offer promise in automating evaluations, current approaches often rely on simplistic aggregation methods (e.g., majority voting), which can fail even when individual agents provide correct answers. To address this, we propose a multi-agent debate judge framework in which agents collaboratively reason and iteratively refine their responses. We formalize the debate process mathematically, analyzing agent interactions and proving that debate amplifies correctness compared to static ensembles. To enhance efficiency, we introduce a stability detection mechanism that models the judges' collective correct-rate dynamics with a time-varying mixture of Beta-Binomial distributions and applies an adaptive stopping criterion based on distributional similarity (the Kolmogorov-Smirnov statistic). Experiments across multiple benchmarks and models demonstrate that our framework improves judgment accuracy over majority voting while maintaining computational efficiency.
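To illustrate the adaptive-stopping idea, here is a minimal sketch that compares the empirical distribution of per-judge agreement scores across consecutive debate rounds with a two-sample Kolmogorov-Smirnov test and halts once the distributions stabilize. This is an assumption-laden simplification: the paper's actual mechanism fits a time-varying Beta-Binomial mixture, which is not reproduced here, and the `Judge` interface (`answer`, `revise`), the agreement score, and the threshold value are hypothetical.

```python
# Hypothetical sketch of debate with adaptive stopping; not the paper's exact algorithm.
from scipy.stats import ks_2samp


def should_stop(prev_scores, curr_scores, ks_threshold=0.1):
    """Stop when consecutive rounds' agreement-score distributions are similar
    (small two-sample Kolmogorov-Smirnov statistic)."""
    statistic, _ = ks_2samp(prev_scores, curr_scores)
    return statistic < ks_threshold


def debate_with_adaptive_stopping(judges, question, max_rounds=5, ks_threshold=0.1):
    """Run debate rounds among judge agents; after each round, record each judge's
    agreement rate with its peers and stop once that distribution stabilizes."""
    # Initial, independent judgments (Judge.answer is an assumed interface).
    answers = [judge.answer(question) for judge in judges]
    prev_scores = None

    for _ in range(max_rounds):
        # Each judge revises its answer after seeing the others' answers
        # (Judge.revise is an assumed interface).
        answers = [judge.revise(question, answers) for judge in judges]

        # Fraction of peers each judge currently agrees with.
        scores = [sum(a == b for b in answers) / len(answers) for a in answers]

        if prev_scores is not None and should_stop(prev_scores, scores, ks_threshold):
            break
        prev_scores = scores

    # Aggregate the debate-refined answers (majority vote over final round).
    return max(set(answers), key=answers.count)
```

In this sketch the stopping rule only needs the agreement scores from two successive rounds, so the extra cost per round is a single KS comparison; rounds beyond the point of consensus stability are skipped, which is where the computational savings come from.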
Similar Papers
Efficient LLM Safety Evaluation through Multi-Agent Debate
Artificial Intelligence
Makes AI safer and cheaper to test.
Multi-Agent LLM Judge: automatic personalized LLM judge design for evaluating natural language generation applications
Computation and Language
Helps computers judge writing better than people.
Who Judges the Judge? LLM Jury-on-Demand: Building Trustworthy LLM Evaluation Systems
Artificial Intelligence
Makes AI judges more trustworthy for important jobs.