Score: 1

Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks

Published: April 26, 2025 | arXiv ID: 2504.18838v1

By: Yixin Cao, Shibo Hong, Xinze Li, and more

Potential Business Impact:

Provides evaluation methods that keep pace as AI models grow more capable.

Business Areas:
Test and Measurement, Data and Analytics

Large Language Models (LLMs) are advancing at a remarkable speed and have become indispensable across academia, industry, and daily applications. To keep pace with these developments, this survey probes the core challenges that the rise of LLMs poses for evaluation. We identify and analyze two pivotal transitions: (i) from task-specific to capability-based evaluation, which reorganizes benchmarks around core competencies such as knowledge, reasoning, instruction following, multi-modal understanding, and safety; and (ii) from manual to automated evaluation, encompassing dynamic dataset curation and "LLM-as-a-judge" scoring. Yet, even with these transitions, a crucial obstacle persists: the evaluation generalization issue. Bounded test sets cannot scale alongside models whose abilities grow seemingly without limit. We dissect this issue, along with the core challenges of the above two transitions, from the perspectives of methods, datasets, evaluators, and metrics. Given the rapid evolution of this field, we will maintain a living GitHub repository (links are in each section) to crowd-source updates and corrections, and warmly invite contributors and collaborators.
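To make the "LLM-as-a-judge" idea mentioned in the abstract concrete, here is a minimal, hedged sketch of how such scoring is commonly set up: a judge prompt template, a call to some judge model, and parsing of a numeric score. This is purely illustrative, not the survey's reference implementation; `call_judge_model` is a hypothetical placeholder for whichever LLM API you actually use.

```python
# Minimal sketch of "LLM-as-a-judge" scoring (illustrative only).
# `call_judge_model` is a hypothetical stand-in for a real LLM API call.
import re

JUDGE_TEMPLATE = """You are an impartial evaluator.
Question: {question}
Candidate answer: {answer}
Rate the answer's correctness and helpfulness on a 1-10 scale.
Reply with a single line: "Score: <number>"."""


def call_judge_model(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to a judge LLM and return its text reply."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")


def judge_score(question: str, answer: str) -> float | None:
    """Ask the judge model to grade one candidate answer; return the parsed score or None."""
    reply = call_judge_model(JUDGE_TEMPLATE.format(question=question, answer=answer))
    match = re.search(r"Score:\s*(\d+(?:\.\d+)?)", reply)
    return float(match.group(1)) if match else None
```

In practice, such judge prompts are usually paired with rubrics, pairwise comparisons, or multiple judge calls to reduce scoring variance; the survey discusses these automated-evaluation designs in more depth.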

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
42 pages

Category
Computer Science:
Computation and Language