THINK-Bench: Evaluating Thinking Efficiency and Chain-of-Thought Quality of Large Reasoning Models
By: Zhiyuan Li, Yi Chang, Yuan Wu
Potential Business Impact:
Makes smart computers think less on easy problems.
Large reasoning models (LRMs) have achieved impressive performance on complex tasks, often outperforming conventional large language models (LLMs). However, the prevalent issue of overthinking severely limits their computational efficiency. Overthinking occurs when models generate excessive and redundant tokens that contribute little to answer accuracy, especially on simple tasks, resulting in a significant waste of computational resources. To systematically investigate this issue, we introduce Think-Bench, a benchmark designed to evaluate the reasoning efficiency of LRMs. We also propose novel efficiency metrics and conduct a comprehensive evaluation of various LRMs across multiple dimensions, including the reasoning process, outcome quality, and chain-of-thought (CoT) characteristics. Our analysis reveals that most LRMs exhibit overthinking when handling easy questions, generating unnecessarily lengthy reasoning chains. While many LRMs demonstrate high CoT quality, several suffer from low efficiency. We hope that Think-Bench can serve as a robust foundation for advancing research into LRMs.
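The abstract does not spell out how the efficiency metrics are computed. Below is a minimal Python sketch of one plausible token-level efficiency measure, assuming each evaluation record exposes the length of the model's chain of thought and the token position at which the correct answer first appears; the names (ReasoningRecord, token_efficiency) and the exact formula are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch of a token-level reasoning-efficiency metric.
# Assumes we can locate the first token position in the chain of thought
# at which the correct answer has already been produced.
from dataclasses import dataclass


@dataclass
class ReasoningRecord:
    total_tokens: int          # tokens in the model's full chain of thought
    first_correct_token: int   # token index where the correct answer first appears
    is_correct: bool           # whether the final answer is correct


def token_efficiency(record: ReasoningRecord) -> float:
    """Fraction of generated tokens that were actually needed.

    1.0 means no wasted tokens; values near 0 indicate heavy overthinking.
    Incorrect answers score 0, since no prefix of the trace suffices.
    """
    if not record.is_correct or record.total_tokens == 0:
        return 0.0
    return record.first_correct_token / record.total_tokens


def mean_efficiency(records: list[ReasoningRecord]) -> float:
    """Average efficiency over a benchmark split (e.g., an easy-question subset)."""
    if not records:
        return 0.0
    return sum(token_efficiency(r) for r in records) / len(records)


# Example: a model that reaches the right answer after 120 tokens but keeps
# generating for 900 more is only ~12% efficient on that question.
print(token_efficiency(ReasoningRecord(total_tokens=1020,
                                       first_correct_token=120,
                                       is_correct=True)))  # ~0.118
```

Under this kind of measure, overthinking on easy questions shows up as a low average efficiency despite high final-answer accuracy, which matches the behavior the abstract describes.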
Similar Papers
ReEfBench: Quantifying the Reasoning Efficiency of LLMs
Artificial Intelligence
Finds if AI truly reasons or just talks a lot.
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Computation and Language
Makes smart computer programs think faster, not waste words.
Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models
Artificial Intelligence
Makes AI think smarter, faster, and more accurately.