OptimalThinkingBench: Evaluating Over and Underthinking in LLMs
By: Pranjal Aggarwal, Seungone Kim, Jack Lanchantin, and others
Potential Business Impact:
Helps AI think just right, not too much.
Thinking LLMs solve complex tasks at the expense of increased compute and overthinking on simpler problems, while non-thinking LLMs are faster and cheaper but underthink on harder reasoning problems. This has led to the development of separate thinking and non-thinking LLM variants, leaving the onus of selecting the optimal model for each query on the end user. In this work, we introduce OptimalThinkingBench, a unified benchmark that jointly evaluates overthinking and underthinking in LLMs and also encourages the development of optimally-thinking models that balance performance and efficiency. Our benchmark comprises two sub-benchmarks: OverthinkingBench, featuring simple queries in 72 domains, and UnderthinkingBench, containing 11 challenging reasoning tasks. Using novel thinking-adjusted accuracy metrics, we perform extensive evaluation of 33 different thinking and non-thinking models and show that no model is able to optimally think on our benchmark. Thinking models often overthink for hundreds of tokens on the simplest user queries without improving performance. In contrast, large non-thinking models underthink, often falling short of much smaller thinking models. We further explore several methods to encourage optimal thinking, but find that these approaches often improve on one sub-benchmark at the expense of the other, highlighting the need for better unified and optimal models in the future.
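The abstract mentions "thinking-adjusted accuracy metrics" without defining them here. As a hypothetical sketch (not the paper's actual formula), such a metric might only credit a correct answer when its thinking tokens fit within a budget, then average accuracy across several budgets; the budget values and function below are illustrative assumptions:

```python
# Hypothetical illustration, NOT the benchmark's exact metric: discount
# correct answers by the number of thinking tokens spent, so verbose
# reasoning chains score lower on simple queries.

def thinking_adjusted_accuracy(results, budgets=(0, 256, 1024, 4096)):
    """results: list of (correct: bool, thinking_tokens: int) pairs.
    For each token budget, an answer counts as correct only if its
    thinking fit within the budget; the final score averages the
    per-budget accuracies."""
    per_budget = []
    for b in budgets:
        acc = sum(1 for correct, toks in results
                  if correct and toks <= b) / len(results)
        per_budget.append(acc)
    return sum(per_budget) / len(per_budget)

# Example: two terse correct answers, one verbose correct, one wrong.
sample = [(True, 50), (True, 120), (True, 3000), (False, 10)]
print(round(thinking_adjusted_accuracy(sample), 4))  # prints 0.4375
```

Under this kind of scoring, a model that overthinks (many tokens on easy queries) loses credit even when its final answers are right, which matches the benchmark's stated goal of rewarding models that balance performance and efficiency.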
Similar Papers
Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
Computation and Language
Makes smart computers think deeper to solve hard math.
Do LLMs Really Need 10+ Thoughts for "Find the Time 1000 Days Later"? Towards Structural Understanding of LLM Overthinking
Computation and Language
Stops computers from thinking too much.
Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs
Computation and Language
Makes AI give shorter, more accurate answers.