Reliable and Efficient Amortized Model-based Evaluation
By: Sang Truong, Yuheng Tu, Percy Liang, and more
Potential Business Impact:
Tests AI faster and more accurately.
Comprehensive evaluation of language models (LMs) during both development and deployment is necessary because these models possess numerous capabilities (e.g., mathematical reasoning, legal support, or medical diagnostics) as well as safety risks (e.g., racial bias, toxicity, or misinformation). The average score across a wide range of benchmarks provides a signal that helps guide the use of these LMs in practice. Currently, holistic evaluations are costly due to the large volume of benchmark questions, making frequent evaluations impractical. A popular attempt to lower the cost is to compute the average score on a subset of the benchmark. Unfortunately, this approach often yields an unreliable measure of LM performance because the average score is confounded with the difficulty of the questions in the subset. Item response theory (IRT) was designed to address this challenge, providing a reliable measurement by carefully controlling for question difficulty. Unfortunately, question difficulty is expensive to estimate. Facing this challenge, we train a model that predicts question difficulty from its content, enabling reliable measurement at a fraction of the cost. In addition, we leverage this difficulty predictor to further improve evaluation efficiency by training a question generator conditioned on a difficulty level. This question generator is essential in adaptive testing, where, instead of using a random subset of the benchmark questions, informative questions are chosen adaptively based on the current estimate of LM performance. Experiments on 22 common natural language benchmarks and 172 LMs show that this approach is more reliable and efficient than current common practice.
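To make the IRT and adaptive-testing ideas concrete, the sketch below shows ability estimation and item selection under a simple 1PL (Rasch) model, where the probability of a correct answer is sigmoid(theta - b) for model ability theta and question difficulty b. This is a minimal illustration of the general technique, not the paper's implementation; function names such as estimate_ability and next_item are hypothetical.

import numpy as np

def p_correct(theta: float, b: np.ndarray) -> np.ndarray:
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_ability(responses: np.ndarray, b: np.ndarray,
                     lr: float = 0.5, steps: int = 200) -> float:
    """Maximum-likelihood estimate of ability theta via gradient ascent.

    responses: 0/1 array of graded answers; b: matching item difficulties.
    The log-likelihood gradient for the Rasch model is sum(y - p).
    """
    theta = 0.0
    for _ in range(steps):
        p = p_correct(theta, b)
        grad = np.sum(responses - p)
        theta += lr * grad / len(b)
    return theta

def next_item(theta: float, b_pool: np.ndarray, asked: set) -> int:
    """Adaptive step: pick the unasked question with maximal Fisher
    information I(theta) = p * (1 - p) at the current ability estimate,
    i.e., the question whose difficulty best matches the model."""
    p = p_correct(theta, b_pool)
    info = p * (1.0 - p)
    info[list(asked)] = -np.inf  # exclude questions already administered
    return int(np.argmax(info))

In an adaptive loop, one would alternate next_item, querying the LM and grading its answer, and re-running estimate_ability, stopping once the ability estimate stabilizes; the paper's difficulty predictor would supply the b values from question content instead of estimating them from response data.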
Similar Papers
On Robustness and Reliability of Benchmark-Based Evaluation of LLMs
Computation and Language
Tests make smart computers seem less smart.
Fluid Language Model Benchmarking
Computation and Language
Tests AI smarter, faster, and more accurately.
SCORE: Systematic COnsistency and Robustness Evaluation for Large Language Models
Computation and Language
Tests AI to see if it's reliable.