Benchmark^2: Systematic Evaluation of LLM Benchmarks

Published: January 7, 2026 | arXiv ID: 2601.03986v1

By: Qi Qian, Chengsong Huang, Jingwen Xu, and more

Potential Business Impact:

Evaluates whether the benchmarks used to test AI models are themselves reliable and fair.

Business Areas:
Test and Measurement, Data and Analytics

The rapid proliferation of benchmarks for evaluating large language models (LLMs) has created an urgent need for systematic methods to assess benchmark quality itself. We propose Benchmark^2, a comprehensive framework comprising three complementary metrics: (1) Cross-Benchmark Ranking Consistency, measuring whether a benchmark produces model rankings aligned with peer benchmarks; (2) Discriminability Score, quantifying a benchmark's ability to differentiate between models; and (3) Capability Alignment Deviation, identifying problematic instances where stronger models fail but weaker models succeed within the same model family. We conduct extensive experiments across 15 benchmarks spanning mathematics, reasoning, and knowledge domains, evaluating 11 LLMs across four model families. Our analysis reveals significant quality variations among existing benchmarks and demonstrates that selective benchmark construction based on our metrics can achieve comparable evaluation performance with substantially reduced test sets.
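The abstract does not give the exact formulas, but the three metric families lend themselves to simple statistical estimates. The following is a minimal Python sketch, not the paper's implementation: it assumes ranking consistency is measured with Kendall's tau against peer benchmarks, discriminability with the spread of model scores, and capability alignment deviation with the fraction of instances where a weaker model in a family succeeds while a stronger one fails. All function names and formula choices are illustrative assumptions.

```python
# Illustrative sketch of the three metric families described in the abstract.
# The exact definitions in the paper may differ; these are assumed stand-ins.
from itertools import combinations
import numpy as np
from scipy.stats import kendalltau


def ranking_consistency(scores: dict[str, dict[str, float]], target: str) -> float:
    """Mean Kendall's tau between the target benchmark's model ranking
    and the rankings induced by every peer benchmark."""
    models = sorted(scores[target])
    target_vals = [scores[target][m] for m in models]
    taus = []
    for peer, peer_scores in scores.items():
        if peer == target:
            continue
        taus.append(kendalltau(target_vals, [peer_scores[m] for m in models])[0])
    return float(np.mean(taus))


def discriminability(model_scores: dict[str, float]) -> float:
    """Spread of model scores on one benchmark; a larger spread means the
    benchmark separates models more clearly."""
    return float(np.std(list(model_scores.values())))


def capability_alignment_deviation(per_item: dict[str, list[int]],
                                   family_order: list[str]) -> float:
    """Fraction of instances where a stronger model fails while a weaker model
    in the same family succeeds (family_order runs weakest to strongest)."""
    n_items = len(next(iter(per_item.values())))
    deviating = 0
    for i in range(n_items):
        results = [per_item[m][i] for m in family_order]  # 1 = correct, 0 = wrong
        if any(results[w] == 1 and results[s] == 0
               for w, s in combinations(range(len(results)), 2)):
            deviating += 1
    return deviating / n_items
```

Kendall's tau is used here because it compares rankings without being affected by differing score scales across benchmarks; the paper may use a different correlation or aggregation.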

Country of Origin
🇺🇸 🇨🇳 United States, China

Page Count
14 pages

Category
Computer Science:
Computation and Language