Score: 2

AutoMetrics: Approximate Human Judgements with Automatically Generated Evaluators

Published: December 19, 2025 | arXiv ID: 2512.17267v1

By: Michael J. Ryan, Yanzhe Zhang, Amol Salunkhe, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Evaluates AI applications faster and with much less human feedback.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Evaluating user-facing AI applications remains a central challenge, especially in open-ended domains such as travel planning, clinical note generation, or dialogue. The gold standard is user feedback (e.g., thumbs up/down) or behavioral signals (e.g., retention), but these are often scarce in prototypes and research projects, or too slow to use for system optimization. We present AutoMetrics, a framework for synthesizing evaluation metrics under low-data constraints. AutoMetrics combines retrieval from MetricBank, a collection of 48 metrics we curate, with automatically generated LLM-as-a-Judge criteria informed by lightweight human feedback. These metrics are composed via regression to maximize correlation with human signal. AutoMetrics takes you from expensive measures to interpretable automatic metrics. Across 5 diverse tasks, AutoMetrics improves Kendall correlation with human ratings by up to 33.4% over LLM-as-a-Judge while requiring fewer than 100 feedback points. We show that AutoMetrics can be used as a proxy reward with equal effect to a verifiable reward. We release the full AutoMetrics toolkit and MetricBank to accelerate adaptive evaluation of LLM applications.
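
To make the composition step concrete: the abstract describes weighting candidate metrics via regression so the combined score tracks a small amount of human feedback, then measuring agreement with Kendall correlation. The sketch below illustrates that idea under stated assumptions; it uses synthetic data and placeholder variable names, and is not the released AutoMetrics toolkit API.

```python
# Minimal sketch: compose candidate metric scores via regression so the
# combined score correlates with sparse human feedback. All names and the
# synthetic data are illustrative, not the AutoMetrics implementation.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Suppose each of 90 responses (fewer than 100 feedback points) has been
# scored by 6 candidate metrics (e.g., retrieved from a metric bank or
# generated as LLM-as-a-Judge criteria), plus one scalar human rating.
metric_scores = rng.normal(size=(90, 6))            # rows: responses, cols: metrics
human_ratings = metric_scores @ rng.normal(size=6) + rng.normal(scale=0.3, size=90)

# Fit a regularized regression that weights the candidate metrics to
# approximate the human signal from the available feedback points.
train, test = slice(0, 60), slice(60, 90)
composer = Ridge(alpha=1.0).fit(metric_scores[train], human_ratings[train])

# The composed metric is a weighted sum of the underlying metric scores,
# so the learned weights remain interpretable.
composed = composer.predict(metric_scores[test])

# Check how well the composed metric ranks held-out responses like humans do.
tau, _ = kendalltau(composed, human_ratings[test])
print(f"Kendall tau on held-out responses: {tau:.3f}")
```

In this setup the held-out Kendall tau plays the role of the paper's evaluation target: a higher tau means the cheap, automatic composite ranks outputs more like the expensive human signal would.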

Country of Origin
🇺🇸 United States


Page Count
56 pages

Category
Computer Science:
Computation and Language