AutoMetrics: Approximate Human Judgements with Automatically Generated Evaluators
By: Michael J. Ryan, Yanzhe Zhang, Amol Salunkhe, and more
Potential Business Impact:
Tests AI tools faster with less human help.
Evaluating user-facing AI applications remains a central challenge, especially in open-ended domains such as travel planning, clinical note generation, or dialogue. The gold standard is user feedback (e.g., thumbs up/down) or behavioral signals (e.g., retention), but these are often scarce in prototypes and research projects, or too slow to use for system optimization. We present AutoMetrics, a framework for synthesizing evaluation metrics under low-data constraints. AutoMetrics combines retrieval from MetricBank, a collection of 48 metrics we curate, with automatically generated LLM-as-a-Judge criteria informed by lightweight human feedback. These metrics are composed via regression to maximize correlation with the human signal. AutoMetrics thus converts expensive human measures into interpretable automatic metrics. Across 5 diverse tasks, AutoMetrics improves Kendall correlation with human ratings by up to 33.4% over LLM-as-a-Judge while requiring fewer than 100 feedback points. We show that AutoMetrics can be used as a proxy reward as effectively as a verifiable reward. We release the full AutoMetrics toolkit and MetricBank to accelerate adaptive evaluation of LLM applications.
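The composition step described above can be pictured with a minimal sketch, not the authors' released toolkit: per-example scores from a handful of candidate metrics (retrieved or LLM-generated) are combined via regression against a small set of human ratings, and the composed metric is judged by its Kendall correlation with held-out human feedback. All names and data below are illustrative assumptions.

```python
# Illustrative sketch of regression-based metric composition, as described in the abstract.
# Assumes per-example scores from candidate metrics already exist; data here is synthetic.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 100 human feedback points, 8 candidate automatic metrics.
n_examples, n_metrics = 100, 8
metric_scores = rng.random((n_examples, n_metrics))   # columns = candidate metric scores
human_ratings = metric_scores @ rng.random(n_metrics) + 0.1 * rng.normal(size=n_examples)

X_train, X_test, y_train, y_test = train_test_split(
    metric_scores, human_ratings, test_size=0.3, random_state=0
)

# Compose the candidate metrics via regression against the human signal.
composer = Ridge(alpha=1.0).fit(X_train, y_train)

# Evaluate the composed metric by rank correlation with held-out human ratings.
tau, _ = kendalltau(composer.predict(X_test), y_test)
print(f"Kendall tau of composed metric vs. human ratings: {tau:.3f}")

# The learned weights show how much each candidate metric contributes,
# which keeps the composed evaluator interpretable.
print("Per-metric weights:", np.round(composer.coef_, 3))
```

In this sketch the regression weights double as an explanation of the composed metric; the actual AutoMetrics framework may differ in its choice of regressor and feature construction.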
Similar Papers
AutoLibra: Agent Metric Induction from Open-Ended Feedback
Artificial Intelligence
Teaches AI to learn from human feedback better.
The illusion of a perfect metric: Why evaluating AI's words is harder than it looks
Computation and Language
Helps AI write better by checking its work.
AutoBench: Automating LLM Evaluation through Reciprocal Peer Assessment
Computation and Language
Tests AI language skills better than old ways.