Distribution-Calibrated Inference-Time Compute for Thinking LLM-as-a-Judge
By: Hamid Dadkhahi, Firas Trabelsi, Parker Riley, and more
Potential Business Impact:
Makes AI judges more trustworthy at picking the best answers.
Thinking Large Language Models (LLMs) used as judges for pairwise preferences remain noisy at the single-sample level, and common aggregation rules (majority vote, soft self-consistency, or instruction-based self-aggregation) are inconsistent when ties are allowed. We study inference-time compute (ITC) for evaluators that generate n independent thinking-rating samples per item, and propose a principled, distribution-calibrated aggregation scheme. Our method models three-way preferences with a Bradley-Terry-Davidson formulation on rating counts, leveraging both polarity (the margin among non-tie votes) and decisiveness (the non-tie rate) to distinguish narrow margins from strong consensus. Across various evaluation benchmarks, our approach consistently reduces mean absolute error (MAE) and increases pairwise accuracy relative to standard baselines, and, when evaluated against human-consensus meta-labels, matches or exceeds individual human raters. These results show that carefully allocating ITC and aggregating with distribution-aware methods turns noisy individual model judgments into reliable ratings for evaluation.
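The abstract does not spell out the estimator, but for a single pair the Davidson extension of Bradley-Terry admits a closed-form fit from the three rating counts (wins for A, wins for B, ties). The sketch below is one plausible instantiation under that reading, not the paper's exact method: the `btd_aggregate` helper, the symmetric pseudo-count prior, and the final scalar score P(A preferred) + 0.5 * P(tie) are all illustrative assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class BTDResult:
    theta: float   # polarity: log-skill gap log(pi_a / pi_b)
    nu: float      # Davidson tie parameter; larger nu = less decisive
    p_a: float     # calibrated P(A preferred)
    p_b: float     # calibrated P(B preferred)
    p_tie: float   # calibrated P(tie)
    score: float   # scalar preference: P(A preferred) + 0.5 * P(tie)


def btd_aggregate(wins_a: int, wins_b: int, ties: int,
                  prior: float = 0.5) -> BTDResult:
    """Fit a Bradley-Terry-Davidson model to n three-way rating samples.

    For a single pair, the maximum-likelihood fit reproduces the
    (pseudo-count-smoothed) empirical frequencies, so every quantity
    has a closed form. The symmetric prior of 0.5 is an illustrative
    smoothing choice, not a value from the paper.
    """
    n = wins_a + wins_b + ties + 3 * prior
    p_a = (wins_a + prior) / n
    p_b = (wins_b + prior) / n
    p_tie = (ties + prior) / n
    # Under Davidson: P(tie) = nu * sqrt(pi_a * pi_b) / Z with
    # Z = pi_a + pi_b + nu * sqrt(pi_a * pi_b), so both parameters
    # follow directly from the calibrated probabilities.
    theta = math.log(p_a / p_b)          # polarity (margin among non-ties)
    nu = p_tie / math.sqrt(p_a * p_b)    # decisiveness (tie propensity)
    return BTDResult(theta, nu, p_a, p_b, p_tie, p_a + 0.5 * p_tie)


# Same 6 votes for A, but different tie structure: a 6-4 split with no
# ties is a narrow margin, while 6-1 with 3 ties is a strong consensus
# among the decisive samples. Majority vote treats both as "A wins".
print(btd_aggregate(6, 4, 0))   # small theta, small nu
print(btd_aggregate(6, 1, 3))   # large theta, larger nu
```

The fitted (theta, nu) pair is what separates a narrow 6-4 margin from a decisive 6-1-with-3-ties consensus, a distinction that plain majority vote collapses; the scalar score simply folds ties in at half weight.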
Similar Papers
Think Deep, Think Fast: Investigating Efficiency of Verifier-free Inference-time-scaling Methods
Artificial Intelligence
Makes AI better at thinking and answering questions.
Through the Judge's Eyes: Inferred Thinking Traces Improve Reliability of LLM Raters
Artificial Intelligence
Helps computers explain their answers better.
TrustJudge: Inconsistencies of LLM-as-a-Judge and How to Alleviate Them
Artificial Intelligence
Makes AI judges more fair and accurate.