Score: 2

Becoming Experienced Judges: Selective Test-Time Learning for Evaluators

Published: December 7, 2025 | arXiv ID: 2512.06751v1

By: Seungyeon Jwa, Daechul Ahn, Reokyoung Kim, and more

Potential Business Impact:

Computers learn to judge better by practicing.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automatic evaluation with large language models, commonly known as LLM-as-a-judge, is now standard across reasoning and alignment tasks. Despite evaluating many samples in deployment, these evaluators typically (i) treat each case independently, missing the opportunity to accumulate experience, and (ii) rely on a single fixed prompt for all cases, neglecting the need for sample-specific evaluation criteria. We introduce Learning While Evaluating (LWE), a framework that allows evaluators to improve sequentially at inference time without requiring training or validation sets. LWE maintains an evolving meta-prompt that (i) produces sample-specific evaluation instructions and (ii) refines itself through self-generated feedback. Furthermore, we propose Selective LWE, which updates the meta-prompt only on self-inconsistent cases, focusing computation where it matters most. This selective approach retains the benefits of sequential learning while being far more cost-effective. Across two pairwise comparison benchmarks, Selective LWE outperforms strong baselines, empirically demonstrating that evaluators can improve during sequential testing with a simple selective update, learning most from the cases they struggle with.

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡°πŸ‡· United States, Korea, Republic of

Repos / Data Links

Page Count
26 pages

Category
Computer Science:
Computation and Language