Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks
By: Yixin Cao, Shibo Hong, Xinze Li, and more
Potential Business Impact:
Improves how AI models are tested as they become more capable.
Large Language Models (LLMs) are advancing at a remarkable pace and have become indispensable across academia, industry, and daily applications. To keep pace with this progress, this survey probes the core challenges that the rise of LLMs poses for evaluation. We identify and analyze two pivotal transitions: (i) from task-specific to capability-based evaluation, which reorganizes benchmarks around core competencies such as knowledge, reasoning, instruction following, multi-modal understanding, and safety; and (ii) from manual to automated evaluation, encompassing dynamic dataset curation and "LLM-as-a-judge" scoring. Yet, even with these transitions, a crucial obstacle persists: the evaluation generalization issue. Bounded test sets cannot scale alongside models whose abilities grow seemingly without limit. We dissect this issue, along with the core challenges of the two transitions above, from the perspectives of methods, datasets, evaluators, and metrics. Because this field is evolving rapidly, we maintain a living GitHub repository (links are in each section) to crowd-source updates and corrections, and we warmly invite contributors and collaborators.
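To make the abstract's mention of "LLM-as-a-judge" scoring concrete, here is a minimal, illustrative sketch rather than the survey's own implementation: a judge model, represented by a hypothetical callable (with a mock stand-in instead of a real API), rates a candidate response against a fixed rubric, and the verdict is parsed and clamped to the scoring range.

```python
from typing import Callable

# Minimal sketch of "LLM-as-a-judge" scoring, one of the automated evaluation
# approaches surveyed. The judge model is passed in as a callable; `mock_judge`
# below is a stand-in, not a real model API.

JUDGE_PROMPT = (
    "You are an impartial judge. Rate the response to the question on a "
    "1-5 scale for correctness and helpfulness.\n"
    "Question: {question}\nResponse: {response}\n"
    "Reply with only the integer score."
)


def judge_score(question: str, response: str,
                judge: Callable[[str], str]) -> int:
    """Ask a judge model to score a candidate response; clamp to 1-5."""
    raw = judge(JUDGE_PROMPT.format(question=question, response=response))
    try:
        score = int(raw.strip())
    except ValueError:
        score = 1  # Treat unparseable judgments as the lowest score.
    return max(1, min(5, score))


def mock_judge(prompt: str) -> str:
    """Stand-in for a real LLM call; always returns a middling score."""
    return "3"


if __name__ == "__main__":
    print(judge_score("What is 2 + 2?", "4", mock_judge))  # -> 3 (from the mock)
```

Real LLM-as-a-judge pipelines typically go further, for example using pairwise comparisons with position swapping to reduce bias, but the control flow is the same: prompt a judge model, parse its verdict, and aggregate scores across a dataset.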
Similar Papers
Beyond Next Word Prediction: Developing Comprehensive Evaluation Frameworks for measuring LLM performance on real world applications
Computation and Language
Tests AI on many tasks, not just one.
A Practical Guide for Evaluating LLMs and LLM-Reliant Systems
Artificial Intelligence
Tests AI language tools for real-world use.
Domain Specific Benchmarks for Evaluating Multimodal Large Language Models
Machine Learning (CS)
Organizes AI tests for different subjects.