Incentivizing Agentic Reasoning in LLM Judges via Tool-Integrated Reinforcement Learning
By: Ran Xu, Jingjing Chen, Jiayu Ye, and more
Potential Business Impact:
Helps computers check answers by running code and doing precise math.
Large Language Models (LLMs) are widely used as judges to evaluate response quality, providing a scalable alternative to human evaluation. However, most LLM judges operate solely on intrinsic text-based reasoning, limiting their ability to verify complex constraints or perform accurate computation. Motivated by the success of tool-integrated reasoning (TIR) in numerous tasks, we propose TIR-Judge, an end-to-end RL framework for training LLM judges that integrates a code executor for precise evaluation. TIR-Judge is built on three principles: (i) diverse training across verifiable and non-verifiable domains, (ii) flexible judgment formats (pointwise, pairwise, listwise), and (iii) iterative RL that bootstraps directly from the initial model without distillation. On seven public benchmarks, TIR-Judge surpasses strong reasoning-based judges by up to 6.4% (pointwise) and 7.7% (pairwise), and achieves listwise performance comparable to Claude-Opus-4 despite having only 8B parameters. Remarkably, TIR-Judge-Zero, trained entirely without distilled judge trajectories, matches the performance of distilled variants, demonstrating that tool-augmented judges can self-evolve through iterative reinforcement learning.
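To make the tool-integrated judging loop concrete, the sketch below shows one plausible flow for a pairwise judgment with a code executor: the judge drafts verification code, the code is executed, and the final verdict is conditioned on the execution result. The prompt wording, the `generate` interface, and the helper names here are illustrative assumptions, not the paper's actual API or training setup.

```python
# Minimal sketch of tool-integrated pairwise judging (illustrative only; the
# model interface and prompts are assumptions, not the authors' implementation).
import io
import contextlib
from typing import Callable

def run_code(snippet: str) -> str:
    """Execute a verification snippet and capture its stdout.
    Toy sandbox: a real system would isolate execution far more strictly."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(snippet, {})
        return buffer.getvalue().strip()
    except Exception as exc:  # report failures back to the judge
        return f"[execution error] {exc}"

def pairwise_judge(prompt: str, response_a: str, response_b: str,
                   generate: Callable[[str], str]) -> str:
    """One tool-integrated judging step: ask the model for verification code,
    execute it, then ask for a verdict grounded in the tool output.
    `generate` is any text-in/text-out LLM call (an assumption for this sketch)."""
    # Step 1: the judge proposes code that checks the responses (e.g. recomputes math).
    code_request = (
        f"Task: {prompt}\nResponse A: {response_a}\nResponse B: {response_b}\n"
        "Write Python code that verifies each response and prints the findings."
    )
    verification_code = generate(code_request)

    # Step 2: run the proposed code and capture its output.
    tool_output = run_code(verification_code)

    # Step 3: the judge issues a verdict conditioned on the execution result.
    verdict_request = (
        f"{code_request}\nExecution result:\n{tool_output}\n"
        "Based on this evidence, answer 'A' or 'B': which response is better?"
    )
    return generate(verdict_request)

if __name__ == "__main__":
    # Stub model so the sketch runs end to end without an actual LLM.
    def stub_generate(text: str) -> str:
        if "Execution result" in text:
            return "A"
        return "print('A computes 17*24 = 408 (correct); B claims 398 (incorrect)')"

    print(pairwise_judge("Compute 17*24.", "408", "398", stub_generate))
```

In an RL setup like the one the abstract describes, a trajectory of this kind (reasoning, generated code, execution feedback, verdict) would be rewarded by verdict correctness, so the judge learns when and how to call the executor rather than relying on text-only reasoning.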
Similar Papers
JudgeLRM: Large Reasoning Models as a Judge
Computation and Language
Makes AI judges better at reasoning through hard problems.
J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning
Computation and Language
Teaches AI to judge answers better by thinking.
Process-Supervised Reinforcement Learning for Interactive Multimodal Tool-Use Agents
Computation and Language
Teaches computers to use tools with voice commands.