Scaling Up Active Testing to Large Language Models
By: Gabrielle Berrada, Jannik Kossen, Muhammed Razzak, and more
Potential Business Impact:
Tests big computer brains better with less work.
Active testing enables label-efficient evaluation of models through careful data acquisition. However, its significant computational costs have previously undermined its use for large models. We show how it can be successfully scaled up to the evaluation of large language models (LLMs). In particular, we show that the surrogate model used to guide data acquisition can be constructed cheaply using in-context learning, does not require updating within an active-testing loop, and can be smaller than the target model. We even find that we can make good data-acquisition decisions without computing predictions with the target model, and we further introduce a single-run error estimator to assess how well active testing is working on the fly. We find that our approach evaluates LLM performance more effectively, and with less data, than current standard practices.
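To make the abstract's recipe concrete, here is a minimal toy sketch of surrogate-guided active testing for a classification setting: a cheap surrogate proposes which pool points to label, the target model is evaluated only on the acquired points, and an importance-weighted estimate corrects for the non-uniform acquisition. All names (`surrogate_probs`, `target_preds`, `labels`) are hypothetical placeholders rather than the paper's code, and the simple with-replacement importance-weighted estimator below stands in for whatever estimator the paper actually uses.

```python
# Toy sketch of surrogate-guided active testing (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

N, K = 10_000, 4   # pool size, number of classes (assumed)
budget = 200       # number of labels we can afford to acquire

# Hypothetical inputs:
#   surrogate_probs[i, k]: surrogate's predictive probability for class k
#                          (e.g. from a smaller LLM used with in-context learning)
#   target_preds[i]:       target model's predicted class
#   labels[i]:             true label, only revealed for acquired points
surrogate_probs = rng.dirichlet(np.ones(K), size=N)
target_preds = surrogate_probs.argmax(axis=1)
labels = rng.integers(0, K, size=N)

# Acquisition proposal: expected 0-1 loss of the target under the surrogate.
# (The abstract notes acquisition can even be driven without target predictions;
# here target_preds is only needed for the final error estimate.)
expected_loss = 1.0 - surrogate_probs[np.arange(N), target_preds]
q = expected_loss / expected_loss.sum()  # acquisition distribution over the pool

# Acquire labels with probability proportional to expected loss (with replacement,
# to keep the importance weighting simple in this sketch).
idx = rng.choice(N, size=budget, replace=True, p=q)

# Self-normalized importance-weighted estimate of the target's error rate:
# reweight each acquired loss by 1 / (N * q_i) to correct for non-uniform sampling.
acquired_loss = (target_preds[idx] != labels[idx]).astype(float)
weights = 1.0 / (N * q[idx])
error_estimate = np.sum(weights * acquired_loss) / np.sum(weights)

print(f"Estimated target error rate: {error_estimate:.3f}")
```

The key design point this illustrates is that the surrogate only needs to rank where the target is likely to be wrong; the reweighting step then keeps the final error estimate consistent even though the labelled points were chosen non-uniformly.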
Similar Papers
InFerActive: Towards Scalable Human Evaluation of Large Language Models through Interactive Inference
Human-Computer Interaction
Helps people check AI writing faster.
LAUD: Integrating Large Language Models with Active Learning for Unlabeled Data
Machine Learning (CS)
Teaches computers to learn from less data.
Learning Facts at Scale with Active Reading
Computation and Language
Teaches computers to learn facts better.