R-Horizon: How Far Can Your Large Reasoning Model Really Go in Breadth and Depth?
By: Yi Lu, Jianing Wang, Linsen Guo, and more
Potential Business Impact:
Teaches computers to solve harder, longer problems.
Recent trends in test-time scaling for reasoning models (e.g., OpenAI o1, DeepSeek-R1) have led to remarkable improvements through long Chain-of-Thought (CoT). However, existing benchmarks mainly focus on immediate, single-horizon tasks and fail to adequately evaluate models' ability to understand and respond to complex, long-horizon scenarios. To address this incomplete evaluation of Large Reasoning Models (LRMs), we propose R-HORIZON, a method designed to stimulate long-horizon reasoning behaviors in LRMs through query composition. Based on R-HORIZON, we construct a long-horizon reasoning benchmark comprising complex multi-step reasoning tasks with interdependent problems that span long reasoning horizons. Through comprehensive evaluation of LRMs using the R-HORIZON benchmark, we find that even the most advanced LRMs suffer significant performance degradation. Our analysis reveals that LRMs have limited effective reasoning length and struggle to allocate their thinking budget appropriately across multiple problems. Recognizing these limitations, we use R-HORIZON to construct long-horizon reasoning data for reinforcement learning with verifiable rewards (RLVR). Compared to training with single-horizon data, RLVR with R-HORIZON not only substantially improves performance on multi-horizon reasoning tasks but also raises accuracy on standard reasoning tasks, with a gain of 7.5 points on AIME2024. These results position R-HORIZON as a scalable, controllable, and low-cost paradigm for enhancing and evaluating the long-horizon reasoning capabilities of LRMs.
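The abstract's core mechanism is query composition: chaining otherwise independent problems so that each step can only be solved using the answer to the previous one, which turns a set of single-horizon questions into one long-horizon task with a single verifiable final answer. The sketch below is a minimal illustration of that idea in a toy arithmetic setting, assuming simple numeric answers; the names (`Step`, `compose_horizon`, the prompt wording) are hypothetical and not the authors' actual implementation.

```python
# Minimal sketch of query composition for long-horizon reasoning data.
# Assumption: each step's ground truth is a function of the previous step's
# answer, so the final answer is verifiable and usable as an RLVR reward target.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Step:
    text: str                       # question text; refers to the previous answer symbolically
    solve: Callable[[int], int]     # ground-truth mapping from previous answer to this step's answer


def compose_horizon(steps: List[Step], seed: int) -> Tuple[str, int]:
    """Chain steps into one prompt whose final answer requires solving every step in order."""
    lines = [f"Given the starting value {seed}, solve the steps in order; each step uses the previous answer."]
    value = seed
    for i, step in enumerate(steps, start=1):
        lines.append(f"Step {i}: {step.text}")
        value = step.solve(value)   # propagate the verifiable intermediate answer
    lines.append("Report only the final answer.")
    return "\n".join(lines), value  # (long-horizon prompt, verifiable final answer)


# Usage example: two chained arithmetic steps.
steps = [
    Step("Add 7 to the value from the previous step.", lambda x: x + 7),
    Step("Multiply the previous answer by 3.", lambda x: x * 3),
]
prompt, final_answer = compose_horizon(steps, seed=5)
print(prompt)
print("Verifiable final answer:", final_answer)  # 5 -> 12 -> 36
```

Because the intermediate answers never appear in the prompt, a model must actually carry each result forward, which is what makes the composed query a controllable probe of effective reasoning length.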
Similar Papers
h1: Bootstrapping LLMs to Reason over Longer Horizons via Reinforcement Learning
Machine Learning (CS)
Teaches computers to solve harder math problems.
Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving
Computation and Language
Solves super hard math problems by thinking step-by-step.
Don't Overthink It: A Survey of Efficient R1-style Large Reasoning Models
Artificial Intelligence
Makes AI think faster without losing accuracy.