Benchmark-Driven Selection of AI: Evidence from DeepSeek-R1
By: Petr Spelda, Vit Stritecky
Potential Business Impact:
Shows how popular AI tests can end up teaching models instead of measuring them.
Evaluation of reasoning language models gained importance after it was observed that they can combine their existing capabilities into novel traces of intermediate steps before completing a task, and that these traces can sometimes help them generalize better than past models. As reasoning becomes the next scaling dimension of large language models, careful study of their capabilities on critical tasks is needed. We show that better performance is not always driven by test-time algorithmic improvements or model size; it can also result from using impactful benchmarks as curricula for learning. We call this benchmark-driven selection of AI and show its effects on DeepSeek-R1 using our sequential decision-making problem from Humanity's Last Exam. Steering the development of AI by impactful benchmarks trades evaluation for learning and makes the novelty of test tasks key for measuring the generalization capabilities of reasoning models. Consequently, some benchmarks could be seen as curricula for training rather than unseen test sets.
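To make the abstract's point about test-task novelty concrete, here is a minimal sketch of how one might probe for benchmark-driven selection: compare a model's accuracy on widely circulated benchmark items against freshly written variants testing the same skill. This is an illustrative assumption, not the paper's method; `query_model`, the task lists, and all names are hypothetical placeholders.

```python
# Minimal sketch (hypothetical): estimate whether benchmark familiarity, rather
# than genuine generalization, explains a model's score.

from typing import Callable, List, Tuple

Task = Tuple[str, str]  # (prompt, expected answer)


def accuracy(query_model: Callable[[str], str], tasks: List[Task]) -> float:
    """Fraction of tasks the model answers correctly (exact match)."""
    if not tasks:
        return 0.0
    correct = sum(1 for prompt, answer in tasks
                  if query_model(prompt).strip() == answer)
    return correct / len(tasks)


def familiarity_gap(query_model: Callable[[str], str],
                    public_tasks: List[Task],
                    novel_tasks: List[Task]) -> float:
    """
    Accuracy on public benchmark items minus accuracy on novel variants of the
    same skill. A large positive gap is consistent with the benchmark acting as
    a training curriculum rather than an unseen test set.
    """
    return accuracy(query_model, public_tasks) - accuracy(query_model, novel_tasks)


if __name__ == "__main__":
    # Toy stand-in for a reasoning model; replace with a real inference call.
    def toy_model(prompt: str) -> str:
        return "42"

    public = [("What is 6 * 7?", "42")]
    novel = [("What is 7 * 8?", "56")]
    print(f"Familiarity gap: {familiarity_gap(toy_model, public, novel):+.2f}")
```

Under this sketch, a gap near zero suggests the score reflects the underlying capability, while a large gap would motivate treating the public benchmark as part of the training curriculum when interpreting results.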
Similar Papers
The Ouroboros of Benchmarking: Reasoning Evaluation in an Era of Saturation
Computation and Language
Tests if smart computers can truly think.
AI Benchmark Democratization and Carpentry
Artificial Intelligence
Makes AI tests stay fair as AI gets smarter.
A Rigorous Benchmark with Multidimensional Evaluation for Deep Research Agents: From Answers to Reports
Artificial Intelligence
Helps AI agents solve hard problems better.