Interleaved Reasoning for Large Language Models via Reinforcement Learning
By: Roy Xie, David Qiu, Deepak Gopinath, and more
Potential Business Impact:
Makes smart computers answer questions faster.
Long chain-of-thought (CoT) significantly enhances large language models' (LLMs) reasoning capabilities. However, the extensive reasoning traces lead to inefficiencies and increased time-to-first-token (TTFT). We propose a novel training paradigm that uses reinforcement learning (RL) to guide reasoning LLMs to interleave thinking and answering for multi-hop questions. We observe that models inherently possess the ability to perform interleaved reasoning, which can be further enhanced through RL. We introduce a simple yet effective rule-based reward that incentivizes correct intermediate steps, guiding the policy model toward correct reasoning paths by leveraging the intermediate signals generated during interleaved reasoning. Extensive experiments across five diverse datasets and three RL algorithms (PPO, GRPO, and REINFORCE++) demonstrate consistent improvements over traditional think-answer reasoning, without requiring external tools. Specifically, our approach reduces TTFT by over 80% on average and improves Pass@1 accuracy by up to 19.3%. Furthermore, our method, trained solely on question answering and logical reasoning datasets, exhibits strong generalization to complex reasoning datasets such as MATH, GPQA, and MMLU. Finally, we conduct an in-depth analysis that reveals several valuable insights into conditional reward modeling.
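To make the rule-based reward idea concrete, here is a minimal, hypothetical Python sketch. It is not the paper's implementation: it assumes the policy emits interleaved <think>/<answer> blocks, scores answers by exact match against reference intermediate and final answers, and grants the intermediate bonus conditionally, only when the format is followed and the final answer is correct. The tag names, weight, and matching rule are illustrative assumptions.

# Illustrative sketch of a conditional, rule-based reward for interleaved reasoning.
# Tag names, the 0.5 weight, and exact-match scoring are assumptions, not the authors' code.
import re

THINK_ANSWER = re.compile(r"<think>(.*?)</think>\s*<answer>(.*?)</answer>", re.DOTALL)

def rule_based_reward(response: str,
                      gold_intermediate: list[str],
                      gold_final: str,
                      intermediate_weight: float = 0.5) -> float:
    """Reward = final-answer correctness + a conditional bonus for correct
    intermediate answers, paid only if the interleaved format is followed
    and the final answer is correct."""
    answers = [a.strip() for _, a in THINK_ANSWER.findall(response)]
    if not answers:
        # Format violation: no interleaved think/answer blocks at all.
        return 0.0

    final_correct = answers[-1].lower() == gold_final.strip().lower()
    reward = 1.0 if final_correct else 0.0

    if final_correct and gold_intermediate:
        # Conditional bonus: fraction of intermediate answers that match the references.
        hits = sum(pred.lower() == gold.strip().lower()
                   for pred, gold in zip(answers[:-1], gold_intermediate))
        reward += intermediate_weight * hits / len(gold_intermediate)
    return reward

In an RL loop (PPO, GRPO, or REINFORCE++), a scalar like this would be computed per sampled response and used as the return signal; the conditional structure is one plausible reading of the paper's "conditional reward modeling."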
Similar Papers
Reasoning Under 1 Billion: Memory-Augmented Reinforcement Learning for Large Language Models
Machine Learning (CS)
Helps small AI learn to think better.
Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning
Computation and Language
Makes AI think smarter, not longer.
Adaptive Deep Reasoning: Triggering Deep Thinking When Needed
Computation and Language
Smart AI picks short or long thinking for answers.