Explicit Reasoning Makes Better Judges: A Systematic Study on Accuracy, Efficiency, and Robustness
By: Pratik Jayarao, Himanshu Gupta, Neeraj Varshney, and more
Potential Business Impact:
Computers that "think" judge better than those that don't.
As Large Language Models (LLMs) are increasingly adopted as automated judges in benchmarking and reward modeling, ensuring their reliability, efficiency, and robustness has become critical. In this work, we present a systematic comparison of "thinking" and "non-thinking" LLMs in the LLM-as-a-judge paradigm using open-source Qwen 3 models of relatively small sizes (0.6B, 1.7B, and 4B parameters). We evaluate both accuracy and computational efficiency (FLOPs) on RewardBench tasks, and further examine augmentation strategies for non-thinking models, including in-context learning, rubric-guided judging, reference-based evaluation, and n-best aggregation. Our results show that despite these enhancements, non-thinking models generally fall short of their thinking counterparts: thinking models achieve approximately 10 percentage points higher accuracy with little overhead (under 2x compute), whereas augmentation strategies such as few-shot learning deliver only modest gains at a much higher cost (over 8x). Bias and robustness analyses further demonstrate that thinking models maintain significantly greater consistency under a variety of bias conditions, including positional, bandwagon, identity, diversity, and random biases (6% higher on average). Extending our experiments to the multilingual setting confirms that the benefits of explicit reasoning carry over beyond English. Overall, our findings provide systematic evidence that explicit reasoning offers clear advantages in the LLM-as-a-judge paradigm, not only in accuracy and efficiency but also in robustness.
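To make the judging setup concrete, here is a minimal sketch of the pairwise LLM-as-a-judge protocol with n-best aggregation mentioned in the abstract. The `judge_fn` callable, the prompt wording, and the `parse_verdict` helper are illustrative assumptions, not the paper's actual implementation; swapping answer order on alternate samples is one simple way to probe the positional bias the paper analyzes.

```python
from collections import Counter
from typing import Callable

# Assumption: judge_fn stands in for any LLM call (e.g., a small Qwen 3 model
# served locally or behind an API). It takes a prompt string and returns the
# model's raw text output. This stub is not part of the paper.
JudgeFn = Callable[[str], str]

# Illustrative pairwise-judging prompt (wording is an assumption).
PROMPT = (
    "You are an impartial judge. Given a question and two candidate answers,\n"
    "reply with exactly 'A' or 'B' to indicate the better answer.\n\n"
    "Question: {question}\n\n"
    "Answer A: {answer_a}\n\n"
    "Answer B: {answer_b}\n\n"
    "Verdict:"
)

def parse_verdict(text: str) -> str:
    """Extract the first 'A' or 'B' from the judge's output ('A' as fallback)."""
    for ch in text.strip().upper():
        if ch in ("A", "B"):
            return ch
    return "A"

def judge_n_best(judge_fn: JudgeFn, question: str, answer_a: str,
                 answer_b: str, n: int = 5) -> str:
    """n-best aggregation: sample n verdicts and return the majority vote.

    On odd-indexed samples the answers are swapped and the verdict mapped
    back, a simple mitigation/probe for positional bias.
    """
    votes = []
    for i in range(n):
        if i % 2 == 0:
            raw = judge_fn(PROMPT.format(question=question,
                                         answer_a=answer_a,
                                         answer_b=answer_b))
            votes.append(parse_verdict(raw))
        else:
            raw = judge_fn(PROMPT.format(question=question,
                                         answer_a=answer_b,
                                         answer_b=answer_a))
            # Judge saw swapped positions, so invert its verdict.
            votes.append("B" if parse_verdict(raw) == "A" else "A")
    return Counter(votes).most_common(1)[0][0]
```

An odd n avoids ties in the majority vote. Note the cost trade-off the abstract highlights: each additional vote requires a full judge inference, which is why aggregation-style augmentations multiply compute while explicit reasoning adds comparatively little overhead.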
Similar Papers
Assessing Judging Bias in Large Reasoning Models: An Empirical Study
Computers and Society
Makes AI judges fairer and more trustworthy.
JudgeLRM: Large Reasoning Models as a Judge
Computation and Language
Makes AI judges better at reasoning through hard problems.
Through the Judge's Eyes: Inferred Thinking Traces Improve Reliability of LLM Raters
Artificial Intelligence
Helps computers explain their answers better.