Answer Matching Outperforms Multiple Choice for Language Model Evaluation
By: Nikhil Chandak, Shashwat Goel, Ameya Prabhu, and more
Potential Business Impact:
Evaluates AI more reliably by having it write its own answers.
Multiple choice benchmarks have long been the workhorse of language model evaluation because grading multiple choice is objective and easy to automate. However, we show that multiple choice questions from popular benchmarks can often be answered without even seeing the question. These shortcuts arise from a fundamental limitation of discriminative evaluation that is not shared by evaluations of the model's free-form, generative answers. Until recently, there appeared to be no viable, scalable alternative to multiple choice, but we show that this has changed. We consider generative evaluation via what we call answer matching: give the candidate model the question without the options, have it generate a free-form response, then use a modern language model with the reference answer to determine whether the response matches the reference. To compare the validity of different evaluation strategies, we annotate MMLU-Pro and GPQA-Diamond to obtain human grading data and measure the agreement of each evaluation approach. We find that answer matching using recent models, even small ones, achieves near-perfect agreement, within the range of inter-annotator agreement. In contrast, both multiple choice evaluation and LLM-as-a-judge without reference answers align poorly with human grading. Improving evaluations via answer matching is not merely a conceptual concern: the rankings of several models change significantly when their free-form responses are evaluated with answer matching. In light of these findings, we discuss how to move the evaluation ecosystem from multiple choice to answer matching.
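To make the protocol concrete, here is a minimal sketch of the answer-matching loop the abstract describes. It is not the authors' released code: the `Generator` callable, the `MATCH_PROMPT` wording, and the `answer_matching_score` helper are all assumptions standing in for whatever model backend and grading prompt one actually uses.

```python
# Minimal sketch of answer matching (hypothetical helper names, not the paper's code).
# A Generator is any text-generation backend: prompt string in, completion string out.
from typing import Callable

Generator = Callable[[str], str]

# Assumed grading prompt: the matcher sees the question, the reference answer,
# and the candidate's free-form response, and returns a yes/no verdict.
MATCH_PROMPT = (
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Candidate response: {response}\n"
    "Does the candidate response convey the same answer as the reference? "
    "Reply with exactly 'yes' or 'no'."
)

def answer_matching_score(
    candidate: Generator,    # model being evaluated
    matcher: Generator,      # grading model with access to the reference answer
    questions: list[dict],   # each item: {"question": str, "reference": str}
) -> float:
    """Fraction of questions where the matcher judges the candidate's
    free-form response as matching the reference answer."""
    correct = 0
    for item in questions:
        # 1. The candidate sees only the question, never the answer options.
        response = candidate(item["question"])
        # 2. The matcher compares the free-form response against the reference.
        verdict = matcher(MATCH_PROMPT.format(
            question=item["question"],
            reference=item["reference"],
            response=response,
        ))
        correct += verdict.strip().lower().startswith("yes")
    return correct / len(questions)
```

Note the design difference from reference-free LLM-as-a-judge: the matcher is only asked whether a response is equivalent to a known reference answer, a narrower task than judging correctness from scratch, which is why even small matcher models can reach near-human agreement.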
Similar Papers
Right Answer, Wrong Score: Uncovering the Inconsistencies of LLM Evaluation in Multiple-Choice Question Answering
Computation and Language
Makes AI answers more honest and fair.
Reasoning Models are Test Exploiters: Rethinking Multiple-Choice
Computation and Language
Tests make smart computers seem smarter than they are.
It is Too Many Options: Pitfalls of Multiple-Choice Questions in Generative AI and Medical Education
Computation and Language
Shows AI is not as smart as we thought.