Circular Reasoning: Understanding Self-Reinforcing Loops in Large Reasoning Models
By: Zenghao Duan, Liang Pang, Zihao Wei, and more
Potential Business Impact:
Stops smart computers from repeating themselves.
Despite the success of test-time scaling, Large Reasoning Models (LRMs) frequently fall into repetitive loops that waste computation and cause inference failure. In this paper, we identify a distinct failure mode we term Circular Reasoning. Unlike traditional model degeneration, this phenomenon manifests as a self-reinforcing trap in which generated content serves as a logical premise for its own recurrence, compelling the model to reiterate preceding text. To analyze this phenomenon systematically, we introduce LoopBench, a dataset designed to capture two loop typologies: numerical loops and statement loops. Mechanistically, we characterize circular reasoning as a state collapse with sharp boundaries, in which semantic repetition precedes textual repetition. We show that reasoning impasses trigger loop onset, after which the loop persists as an inescapable cycle driven by a self-reinforcing V-shaped attention mechanism. Guided by these findings, we employ the Cumulative Sum (CUSUM) algorithm to capture these precursors for early loop prediction. Experiments across diverse LRMs validate the method's predictive accuracy and shed light on the stability of long-chain reasoning.
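The abstract does not ship code, but the detection idea maps naturally onto a standard one-sided CUSUM change detector: watch a per-step repetition signal, and raise an alarm once its upward drift from a calibrated baseline accumulates past a threshold. The Python sketch below is a minimal illustration under assumptions, not the authors' implementation; the function name cusum_loop_alarm, the choice of signal (cosine similarity between embeddings of consecutive reasoning segments), and the parameters k, h, and warmup are all hypothetical.

import numpy as np

def cusum_loop_alarm(scores, k=0.05, h=0.5, warmup=8):
    """One-sided CUSUM change detector over a per-step repetition signal.

    scores : repetition scores in [0, 1] for each reasoning step, e.g.
             cosine similarity between embeddings of consecutive
             reasoning segments (a hypothetical choice of signal).
    k      : reference value; drift smaller than baseline + k is ignored.
    h      : alarm threshold on the cumulative statistic.
    warmup : number of early, presumed loop-free steps used to
             calibrate the baseline.

    Returns the index of the first step whose accumulated upward
    drift exceeds h, or None if no alarm fires.
    """
    baseline = float(np.mean(scores[:warmup]))
    s = 0.0
    for t, x in enumerate(scores):
        # Accumulate only drift above baseline + k; clamp at zero so
        # transient dips do not mask a later sustained rise.
        s = max(0.0, s + (x - baseline - k))
        if s > h:
            return t  # predicted loop onset, before verbatim repetition
    return None

# Toy usage: similarity hovers near 0.3, then climbs toward 0.95
# as the model begins repeating itself semantically.
rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(0.3, 0.02, 40),
    np.linspace(0.3, 0.95, 20),
])
print(cusum_loop_alarm(scores))  # fires partway into the upward drift

Because the statistic updates in constant time per step, the same check could plausibly run online during decoding, letting generation be truncated or perturbed as soon as the alarm fires rather than after verbatim repetition sets in.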
Similar Papers
Wait, Wait, Wait... Why Do Reasoning Models Loop?
Machine Learning (CS)
Fixes the AI mistakes that make models repeat themselves.
Incorporating Self-Rewriting into Large Language Model Reasoning Reinforcement
Computation and Language
Teaches computers to think better and faster.
Modeling Hierarchical Thinking in Large Reasoning Models
Artificial Intelligence
Helps computers think step-by-step like people.