Large Reasoning Models are not thinking straight: on the unreliability of thinking trajectories
By: Jhouben Cuesta-Ramirez, Samuel Beaussant, Mehdi Mounsif
Potential Business Impact:
Models get stuck thinking too much and ignore right answers even when given them.
Large Language Models (LLMs) trained via Reinforcement Learning (RL) have recently achieved impressive results on reasoning benchmarks. Yet, growing evidence shows that these models often generate longer but ineffective chains of thought (CoTs), calling into question whether benchmark gains reflect real reasoning improvements. We present new evidence of overthinking, where models disregard correct solutions even when explicitly provided, instead continuing to generate unnecessary reasoning steps that often lead to incorrect conclusions. Experiments on three state-of-the-art models using the AIME2024 math benchmark reveal critical limitations in these models' ability to integrate corrective information, posing new challenges for achieving robust and interpretable reasoning.
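To make the experimental idea concrete, below is a minimal sketch (not the authors' code) of the "corrective information" probe the abstract describes: a verified correct solution is injected into the prompt, and we check whether the model's final answer actually adopts it or whether it reasons past the hint. The query_model() stub, the boxed-answer convention, and the problem/solution/answer fields are assumptions for illustration, not details from the paper.

```python
# Hypothetical probe for overthinking: give the model the correct solution
# up front and measure how often it still ends on a different answer.
import re


def build_prompt(problem: str, correct_solution: str) -> str:
    """Prepend a verified solution so the model only needs to adopt it."""
    return (
        "You are given a math problem together with a verified correct solution.\n"
        f"Problem: {problem}\n"
        f"Verified solution (the final answer is correct): {correct_solution}\n"
        "Confirm the answer and reply with it inside \\boxed{...}."
    )


def extract_boxed_answer(text: str) -> str | None:
    """Pull the last \\boxed{...} value from the model output, if any."""
    matches = re.findall(r"\\boxed\{([^}]*)\}", text)
    return matches[-1].strip() if matches else None


def query_model(prompt: str) -> str:
    """Placeholder: call your reasoning model here (API or local inference)."""
    raise NotImplementedError


def hint_ignore_rate(problems: list[dict]) -> float:
    """Fraction of items where the model disregards the provided correct answer."""
    ignored = 0
    for item in problems:
        output = query_model(build_prompt(item["problem"], item["solution"]))
        answer = extract_boxed_answer(output)
        if answer != item["answer"]:
            ignored += 1  # model reasoned past the hint and landed elsewhere
    return ignored / len(problems)
```

Under this setup, a nonzero ignore rate on problems where the answer was handed to the model is the overthinking signal the paper reports; the paper's own evaluation on AIME2024 may differ in prompt wording and answer matching.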
Similar Papers
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Computation and Language
Makes smart computer programs think faster, not waste words.
Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models
Artificial Intelligence
Computers learn to think step-by-step for harder problems.
Interleaved Reasoning for Large Language Models via Reinforcement Learning
Computation and Language
Makes smart computers answer questions faster.