Reasoning Model Is Superior LLM-Judge, Yet Suffers from Biases

Published: January 7, 2026 | arXiv ID: 2601.03630v1

By: Hui Huang, Xuanxin Wu, Muyun Yang, and more

Potential Business Impact:

Makes AI judges fairer and more accurate.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper presents the first systematic comparison investigating whether Large Reasoning Models (LRMs) are superior judges to non-reasoning LLMs. The empirical analysis yields four key findings: 1) LRMs outperform non-reasoning LLMs in judgment accuracy, particularly on reasoning-intensive tasks; 2) LRMs demonstrate superior instruction-following capabilities in evaluation contexts; 3) LRMs exhibit enhanced robustness against adversarial attacks targeting judgment tasks; 4) however, LRMs still exhibit strong biases related to superficial quality. To improve robustness against these biases, the authors propose PlanJudge, an evaluation strategy that prompts the model to generate an explicit evaluation plan before executing the judgment. Despite its simplicity, their experiments demonstrate that PlanJudge significantly mitigates biases in both LRMs and standard LLMs.
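The summary does not include the paper's exact prompts, but the plan-then-execute idea behind PlanJudge can be sketched as a two-stage prompting flow. The function names and prompt wording below are illustrative assumptions, not the authors' implementation:

```python
def build_plan_prompt(instruction: str) -> str:
    # Stage 1 (hypothetical): ask the judge model to write an explicit
    # evaluation plan -- the criteria and the order to check them --
    # before it sees or scores any candidate responses.
    return (
        "You will evaluate two responses to the instruction below.\n"
        "Before judging, write an explicit evaluation plan: list the "
        "criteria you will check and the order you will check them in.\n\n"
        f"Instruction: {instruction}"
    )


def build_judge_prompt(instruction: str, plan: str,
                       response_a: str, response_b: str) -> str:
    # Stage 2 (hypothetical): have the model execute its own plan
    # step by step against both candidates, which is intended to
    # anchor the verdict to content criteria rather than surface style.
    return (
        f"Evaluation plan:\n{plan}\n\n"
        f"Instruction: {instruction}\n\n"
        f"Response A: {response_a}\n\n"
        f"Response B: {response_b}\n\n"
        "Follow your plan step by step, then output the better "
        "response ('A' or 'B') with a brief justification."
    )


# Usage sketch: the plan prompt is sent to the judge model first; its
# reply (the plan) is then embedded in the second prompt.
plan_prompt = build_plan_prompt("Summarize the article in one sentence.")
judge_prompt = build_judge_prompt(
    "Summarize the article in one sentence.",
    "1. Check factual coverage. 2. Check concision.",
    "Summary A ...",
    "Summary B ...",
)
```

The key design point, per the abstract, is simply that planning is separated from execution: the model commits to criteria before seeing how polished each answer looks.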

Country of Origin
🇨🇳 🇯🇵 China, Japan

Page Count
11 pages

Category
Computer Science:
Computation and Language