Towards Concise and Adaptive Thinking in Large Reasoning Models: A Survey
By: Jason Zhu, Hongyu Li
Potential Business Impact:
Makes smart computer thinking faster and less wasteful.
Large reasoning models (LRMs) such as OpenAI o1 and DeepSeek R1 have demonstrated impressive performance on complex reasoning tasks like mathematics and programming by generating long Chain-of-Thought (CoT) reasoning sequences (slow thinking), compared with traditional large language models (fast thinking). However, these reasoning models also face a major challenge: they generate unnecessarily lengthy and redundant reasoning chains even for trivial questions. This phenomenon wastes inference resources, increases response time for simple queries, and hinders the practical deployment of LRMs in real-world products. It is therefore crucial to shorten lengthy reasoning chains and to learn adaptive reasoning that switches between fast and slow thinking based on input difficulty. In this survey, we provide a comprehensive overview of recent progress in concise and adaptive thinking for efficient reasoning in LRMs, covering methodologies, benchmarks, and challenges for future exploration. We hope this survey helps researchers quickly understand the landscape of this field and inspires novel adaptive thinking ideas that facilitate better use of LRMs.
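The core idea behind adaptive thinking described above, routing a query to fast or slow reasoning depending on its estimated difficulty, can be pictured as a simple controller. The sketch below is illustrative only and is not drawn from any surveyed method: `estimate_difficulty`, `fast_answer`, and `slow_answer` are hypothetical placeholders standing in for a real difficulty estimator and two decoding configurations of the same LRM.

```python
# Illustrative sketch of difficulty-aware routing between "fast" and "slow" thinking.
# All functions are hypothetical stand-ins, not an API from any surveyed paper.

def estimate_difficulty(question: str) -> float:
    """Toy difficulty proxy: longer questions with math-style keywords score higher.
    A real system might use a trained classifier or the model's own confidence."""
    keywords = ("prove", "integral", "optimize", "algorithm", "derive")
    score = min(len(question) / 200.0, 1.0)
    score += 0.3 * sum(kw in question.lower() for kw in keywords)
    return min(score, 1.0)

def fast_answer(question: str) -> str:
    """Stand-in for a short, direct generation without an explicit chain of thought."""
    return f"[fast] concise answer to: {question}"

def slow_answer(question: str) -> str:
    """Stand-in for long Chain-of-Thought generation with a larger token budget."""
    return f"[slow] step-by-step reasoning for: {question}"

def adaptive_answer(question: str, threshold: float = 0.5) -> str:
    """Route easy queries to fast thinking and hard ones to slow thinking."""
    difficulty = estimate_difficulty(question)
    return slow_answer(question) if difficulty >= threshold else fast_answer(question)

if __name__ == "__main__":
    print(adaptive_answer("What is 2 + 2?"))
    print(adaptive_answer("Prove that the sum of the first n odd numbers equals n^2."))
```

In practice, the threshold and the difficulty signal are where the surveyed methods differ most; the sketch only shows the routing skeleton.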
Similar Papers
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Computation and Language
Makes smart computer programs think faster, not waste words.
Don't Overthink It: A Survey of Efficient R1-style Large Reasoning Models
Artificial Intelligence
Makes AI think faster without losing accuracy.
Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models
Artificial Intelligence
Makes computers think deeper to solve hard problems.