Beyond Single-Point Judgment: Distribution Alignment for LLM-as-a-Judge
By: Luyu Chen, Zeyu Zhang, Haoran Tan, and more
Potential Business Impact:
Trains AI judges to match the full spread of human opinions, not just a single score, making automated evaluation more reliable.
LLMs have emerged as powerful evaluators in the LLM-as-a-Judge paradigm, offering significant gains in efficiency and flexibility over human judgment. However, previous methods rely primarily on single-point evaluations, overlooking the inherent diversity and uncertainty in human assessments. This approach leads to information loss and reduces evaluation reliability. To address this limitation, we propose a novel training framework that explicitly aligns the LLM-generated judgment distribution with the empirical human distribution. Specifically, we introduce a distributional alignment objective based on KL divergence, combined with an auxiliary cross-entropy regularization to stabilize training. Furthermore, because empirical distributions may be estimated from only a limited number of human annotations, we incorporate adversarial training to enhance model robustness against distribution perturbations. Extensive experiments across various LLM backbones and evaluation tasks demonstrate that our framework significantly outperforms existing closed-source LLMs and conventional single-point alignment methods, with improved alignment quality, evaluation accuracy, and robustness.
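To make the objective concrete, here is a minimal sketch of the kind of loss the abstract describes, assuming a PyTorch setup where the judge LLM produces logits over K discrete rating categories and each item carries an empirical distribution of human ratings. This is not the authors' code: the function names, the weight alpha, the epsilon scale, and the simple random simplex perturbation (standing in for a learned adversarial perturbation) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_loss(logits, human_dist, alpha=0.1):
    """KL(human || model) distributional alignment plus an auxiliary
    cross-entropy term toward the majority human rating, which the
    paper uses to stabilize training. `alpha` is an assumed weight."""
    log_model = F.log_softmax(logits, dim=-1)
    # Distributional alignment: KL divergence between the empirical
    # human distribution and the model's judgment distribution.
    # F.kl_div expects log-probabilities as input and probabilities
    # as target, giving KL(target || input) pointwise.
    kl = F.kl_div(log_model, human_dist, reduction="batchmean")
    # Cross-entropy regularizer against the most frequent human rating.
    hard_labels = human_dist.argmax(dim=-1)
    ce = F.cross_entropy(logits, hard_labels)
    return kl + alpha * ce

def perturb_distribution(human_dist, epsilon=0.05):
    """Perturbs the empirical distribution to mimic the noise of having
    few human annotations. A random perturbation re-normalized onto the
    simplex is used here for simplicity; the paper's adversarial
    training would instead choose a worst-case perturbation."""
    noise = torch.randn_like(human_dist) * epsilon
    perturbed = torch.clamp(human_dist + noise, min=1e-6)
    return perturbed / perturbed.sum(dim=-1, keepdim=True)

# Usage: one step on a batch of 4 items rated on a 5-point scale.
logits = torch.randn(4, 5, requires_grad=True)       # judge model outputs
human_dist = torch.softmax(torch.randn(4, 5), dim=-1)  # empirical ratings
loss = alignment_loss(logits, perturb_distribution(human_dist))
loss.backward()
```

Training against perturbed copies of the target distribution is what gives the robustness the abstract claims: the model cannot overfit to a distribution estimated from only a handful of annotators.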
Similar Papers
Aligning Black-box Language Models with Human Judgments
Computation and Language
Makes AI judges agree more closely with human evaluators.
On Evaluating LLM Alignment by Evaluating LLMs as Judges
Computation and Language
Tests whether an AI follows human preferences by evaluating its judging ability rather than grading its answers.
Multi-Agent LLM Judge: automatic personalized LLM judge design for evaluating natural language generation applications
Computation and Language
Builds teams of AI judges that evaluate generated text more accurately.