Becoming Experienced Judges: Selective Test-Time Learning for Evaluators
By: Seungyeon Jwa, Daechul Ahn, Reokyoung Kim, and more
Potential Business Impact:
Computers learn to judge better by practicing.
Automatic evaluation with large language models, commonly known as LLM-as-a-judge, is now standard across reasoning and alignment tasks. Despite evaluating many samples in deployment, these evaluators typically (i) treat each case independently, missing the opportunity to accumulate experience, and (ii) rely on a single fixed prompt for all cases, neglecting the need for sample-specific evaluation criteria. We introduce Learning While Evaluating (LWE), a framework that allows evaluators to improve sequentially at inference time without requiring training or validation sets. LWE maintains an evolving meta-prompt that (i) produces sample-specific evaluation instructions and (ii) refines itself through self-generated feedback. Furthermore, we propose Selective LWE, which updates the meta-prompt only on self-inconsistent cases, focusing computation where it matters most. This selective approach retains the benefits of sequential learning while being far more cost-effective. Across two pairwise comparison benchmarks, Selective LWE outperforms strong baselines, empirically demonstrating that evaluators can improve during sequential testing with a simple selective update, learning most from the cases they struggle with.
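Below is a minimal sketch of the Selective LWE loop as it is described in the abstract only: an evolving meta-prompt produces sample-specific criteria, and the meta-prompt is refined via self-generated feedback only on self-inconsistent cases. The function names (`call_llm`, `judge`, `selective_lwe`), the prompt wording, and the use of an order-swap check to detect self-inconsistency are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Selective LWE (assumptions: prompt wording, order-swap
# consistency check, and all function names are illustrative, not the paper's code).

def call_llm(prompt: str) -> str:
    """Stub: replace with a real LLM client call of your choice."""
    return "A"  # canned verdict so the sketch runs end to end


def judge(meta_prompt: str, question: str, resp_a: str, resp_b: str) -> str:
    # Step 1: derive sample-specific evaluation criteria from the meta-prompt.
    criteria = call_llm(
        f"{meta_prompt}\n\nQuestion: {question}\n"
        "List the criteria most relevant to judging this case."
    )
    # Step 2: judge the pair under those criteria; expect 'A' or 'B'.
    verdict = call_llm(
        f"Criteria:\n{criteria}\n\nQuestion: {question}\n"
        f"Response A: {resp_a}\nResponse B: {resp_b}\n"
        "Which response is better? Answer 'A' or 'B'."
    )
    return verdict.strip()


def selective_lwe(meta_prompt: str, samples):
    """Evaluate samples sequentially; update the meta-prompt only when the
    verdict flips under a swapped response order (a self-inconsistent case)."""
    verdicts = []
    for question, resp_a, resp_b in samples:
        v1 = judge(meta_prompt, question, resp_a, resp_b)
        v2 = judge(meta_prompt, question, resp_b, resp_a)  # swapped order
        # Consistent means both calls prefer the same underlying response.
        consistent = (v1 == "A" and v2 == "B") or (v1 == "B" and v2 == "A")
        if not consistent:
            # Self-generated feedback refines the meta-prompt for future cases.
            feedback = call_llm(
                f"The judge gave contradictory verdicts ({v1} vs {v2}) on:\n"
                f"{question}\nSuggest a revision to the evaluation instructions."
            )
            meta_prompt = call_llm(
                f"Current meta-prompt:\n{meta_prompt}\n\nFeedback:\n{feedback}\n"
                "Rewrite the meta-prompt incorporating this feedback."
            )
        verdicts.append(v1)
    return verdicts, meta_prompt
```

Because the meta-prompt is rewritten only on inconsistent cases, the extra LLM calls are concentrated on the samples the evaluator struggles with, which is the cost-saving point the abstract makes for Selective LWE over updating on every case.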
Similar Papers
Learning an Efficient Multi-Turn Dialogue Evaluator from Multiple Judges
Computation and Language
Grades AI chats fast using one smart judge
Who Judges the Judge? LLM Jury-on-Demand: Building Trustworthy LLM Evaluation Systems
Artificial Intelligence
Makes AI judges more trustworthy for important jobs.