The Ouroboros of Benchmarking: Reasoning Evaluation in an Era of Saturation
By: İbrahim Ethem Deveci, Duygu Ataman
The rapid rise of Large Language Models (LLMs) and Large Reasoning Models (LRMs) has been accompanied by an equally rapid proliferation of benchmarks used to assess them. However, as model competence improves through scaling and novel training advances, and as many of these datasets likely end up in pre- or post-training data, results saturate, driving a continuous need for new and more challenging replacements. In this paper, we ask whether surpassing a benchmark truly demonstrates reasoning ability, or whether we are simply tracking numbers divorced from the capabilities we claim to measure. We present an investigation focused on three model families, from OpenAI, Anthropic, and Google, and on how their reasoning capabilities across different benchmarks have evolved over the years. We also analyze how performance on different reasoning tasks has trended over time and discuss the current state of benchmarking and its remaining challenges. By offering a comprehensive overview of benchmarks and reasoning tasks, our work aims to serve as a first reference to ground future research in reasoning evaluation and model development.
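As a rough illustration of the kind of trend analysis the abstract describes, the sketch below aggregates per-year benchmark scores by model family and flags points where a benchmark appears saturated. The records, scores, and the 0.95 threshold are invented placeholders for illustration, not results or methodology from the paper.

```python
# Minimal sketch: average (hypothetical) benchmark scores per model family
# and year, then flag near-saturated points where a benchmark stops
# discriminating between models. All numbers are illustrative placeholders.

from collections import defaultdict

# (family, year, benchmark, accuracy) -- invented example records
records = [
    ("OpenAI",    2022, "BenchmarkA", 0.57),
    ("OpenAI",    2023, "BenchmarkA", 0.92),
    ("OpenAI",    2024, "BenchmarkA", 0.96),
    ("Anthropic", 2023, "BenchmarkA", 0.88),
    ("Anthropic", 2024, "BenchmarkA", 0.95),
    ("Google",    2023, "BenchmarkA", 0.86),
    ("Google",    2024, "BenchmarkA", 0.94),
]

def yearly_means(rows):
    """Mean accuracy per (family, year), averaged across benchmarks."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for family, year, _benchmark, acc in rows:
        sums[(family, year)] += acc
        counts[(family, year)] += 1
    return {key: sums[key] / counts[key] for key in sums}

def saturation_flags(means, threshold=0.95):
    """Flag (family, year) points whose mean score exceeds the threshold,
    a crude proxy for a benchmark nearing its ceiling."""
    return {key: score for key, score in means.items() if score >= threshold}

if __name__ == "__main__":
    means = yearly_means(records)
    for (family, year), score in sorted(means.items()):
        print(f"{family:9s} {year}: mean accuracy {score:.2f}")
    print("Near-saturated points:", saturation_flags(means))
```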