InftyThink: Breaking the Length Limits of Long-Context Reasoning in Large Language Models
By: Yuchen Yan, Yongliang Shen, Yang Liu, and more
Potential Business Impact:
Lets computers think through long problems without getting tired.
Advanced reasoning in large language models has achieved remarkable performance on challenging tasks, but the prevailing long-context reasoning paradigm faces critical limitations: quadratic computational scaling with sequence length, reasoning constrained by maximum context boundaries, and performance degradation beyond pre-training context windows. Existing approaches primarily compress reasoning chains without addressing the fundamental scaling problem. To overcome these challenges, we introduce InftyThink, a paradigm that transforms monolithic reasoning into an iterative process with intermediate summarization. By interleaving short reasoning segments with concise progress summaries, our approach enables unbounded reasoning depth while maintaining bounded computational costs. This creates a characteristic sawtooth memory pattern that significantly reduces computational complexity compared to traditional approaches. Furthermore, we develop a methodology for reconstructing long-context reasoning datasets into our iterative format, transforming OpenR1-Math into 333K training instances. Experiments across multiple model architectures demonstrate that our approach reduces computational costs while improving performance, with Qwen2.5-Math-7B showing 3-13% improvements across MATH500, AIME24, and GPQA_diamond benchmarks. Our work challenges the assumed trade-off between reasoning depth and computational efficiency, providing a more scalable approach to complex reasoning without architectural modifications.
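The interleaved reason-then-summarize loop the abstract describes is easiest to see as pseudocode. Below is a minimal Python sketch under stated assumptions: the `generate` helper, the prompt wording, the segment budget `eta`, and the `ANSWER:` stop marker are all illustrative stand-ins, not the paper's actual interface or prompts.

```python
# Minimal sketch of InftyThink-style iterative reasoning with intermediate
# summarization. The model call (`generate`), prompts, and stop marker are
# illustrative assumptions, not the paper's actual implementation.

def generate(prompt: str, max_tokens: int) -> str:
    """Stand-in for a call to any LLM; replace with a real client."""
    raise NotImplementedError

def infty_think(question: str, eta: int = 2048, max_rounds: int = 32) -> str:
    """Interleave bounded reasoning segments with concise progress summaries.

    Each round the model sees only the question plus the latest summary, so
    the live context stays bounded near eta tokens instead of growing with
    the full reasoning trace -- the "sawtooth" memory pattern: context grows
    within a segment, then collapses back to a short summary.
    """
    summary = ""
    for _ in range(max_rounds):
        prompt = (
            f"Question: {question}\n"
            f"Progress so far: {summary or '(none)'}\n"
            "Continue reasoning. If you reach the final answer, "
            "end with: ANSWER: <your answer>."
        )
        segment = generate(prompt, max_tokens=eta)  # bounded reasoning segment
        if "ANSWER:" in segment:
            return segment.split("ANSWER:", 1)[1].strip()
        # Collapse this segment into a concise summary that seeds the next round.
        summary = generate(
            "Summarize the reasoning progress so far in a few sentences, "
            f"keeping key partial results:\n{summary}\n{segment}",
            max_tokens=256,
        )
    return "no answer within the iteration budget"
```

This also makes the cost claim concrete: with a fixed per-round budget of roughly eta tokens, self-attention costs on the order of eta squared per round, so k rounds cost about k times eta squared, which grows linearly in the number of rounds, whereas a single monolithic trace of length k times eta costs on the order of (k times eta) squared.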
Similar Papers
InfiniteICL: Breaking the Limit of Context Window Size via Long Short-term Memory Transformation
Computation and Language
Lets computers remember much more information.
Scaling Reasoning can Improve Factuality in Large Language Models
Computation and Language
Makes computers answer questions more accurately.
Scalable Chain of Thoughts via Elastic Reasoning
Machine Learning (CS)
Lets AI finish tasks with less thinking time.