AdaJudge: Adaptive Multi-Perspective Judging for Reward Modeling
By: Yongliang Miao, Yangyang Liang, Mengnan Du
Reward modeling is essential for aligning large language models with human preferences, yet predominant architectures rely on a static pooling strategy to condense sequences into scalar scores. This paradigm, however, suffers from two key limitations: a static inductive bias that misaligns with task-dependent preference signals, and a representational mismatch, as the backbone is optimized for generation rather than fine-grained discrimination. To address this, we propose AdaJudge, a unified framework that jointly adapts representation and aggregation. AdaJudge first refines backbone representations into a discrimination-oriented space via gated refinement blocks. It then replaces the static readout with an adaptive multi-view pooling module that dynamically routes and combines evidence. Extensive experiments on RM-Bench and JudgeBench show that AdaJudge outperforms strong off-the-shelf reward models and traditional pooling baselines.
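The abstract describes two components: gated refinement blocks that reshape backbone hidden states, and an adaptive multi-view pooling module that routes among several readouts instead of one static one. The paper does not give the exact equations here, so the following is only a minimal NumPy sketch of one plausible instantiation: a gated residual refinement, three candidate pooling views (mean, max, last token), and a learned softmax router that mixes them before a linear reward head. All weight matrices and the specific view set are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 8   # toy hidden size
T = 5   # toy sequence length

# Stand-in for backbone hidden states of one response: (T, d)
H = rng.standard_normal((T, d))

# Gated refinement block (hypothetical form): a gated residual
# update that nudges generation-oriented features toward a
# discrimination-oriented space.
Wg = rng.standard_normal((d, d)) * 0.1
Wt = rng.standard_normal((d, d)) * 0.1
H_ref = H + sigmoid(H @ Wg) * np.tanh(H @ Wt)

# Adaptive multi-view pooling (hypothetical form): three candidate
# "views" of the sequence instead of a single static readout.
views = np.stack([H_ref.mean(axis=0),   # mean pooling
                  H_ref.max(axis=0),    # max pooling
                  H_ref[-1]])           # last-token readout

# A router scores the views from a summary of the refined sequence
# and softmax-normalizes the scores into mixing weights.
Wr = rng.standard_normal((d, 3)) * 0.1
weights = softmax(H_ref.mean(axis=0) @ Wr)   # (3,), sums to 1
pooled = weights @ views                     # (d,) combined evidence

# Scalar reward via a linear head.
w_out = rng.standard_normal(d) * 0.1
reward = float(pooled @ w_out)
```

Because the router's weights depend on the input, different prompts can emphasize different views, which is the adaptivity that a fixed mean- or last-token readout lacks.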