Training Reasoning Models on Saturated Problems via Failure-Prefix Conditioning
By: Minwu Kim, Safal Shrestha, Keith Ross
Potential Business Impact:
Teaches AI to learn better from its mistakes.
Reinforcement Learning with Verifiable Rewards (RLVR) has substantially improved the reasoning abilities of large language models (LLMs), yet training often stalls as problems become saturated. We identify the core challenge as the poor accessibility of informative failures: learning signals exist but are rarely encountered during standard rollouts. To address this, we propose failure-prefix conditioning, a simple and effective method for learning from saturated problems. Rather than starting from the original question, our approach reallocates exploration by conditioning training on prefixes derived from rare incorrect reasoning trajectories, thereby exposing the model to failure-prone states. We observe that failure-prefix conditioning yields performance gains matching those of training on medium-difficulty problems, while preserving token efficiency. Furthermore, we analyze the model's robustness, finding that our method reduces performance degradation under misleading failure prefixes, albeit with a mild trade-off in adherence to correct early reasoning. Finally, we demonstrate that an iterative approach, which refreshes failure prefixes during training, unlocks additional gains after performance plateaus. Overall, our results suggest that failure-prefix conditioning offers an effective pathway to extend RLVR training on saturated problems.
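To make the mechanism concrete, below is a minimal sketch of how failure-prefix conditioning could be wired into one RLVR update step. The interfaces (generate_fn, verify_fn, trainer.update) and the fixed prefix fraction are assumptions made for illustration, not the paper's released implementation.

```python
# Illustrative sketch of failure-prefix conditioning for one RLVR update.
# generate_fn, verify_fn, and trainer.update are assumed interfaces, and the
# fixed prefix fraction is a placeholder heuristic, not the paper's code.

def collect_failure_prefixes(generate_fn, verify_fn, question,
                             n_rollouts=64, prefix_frac=0.5):
    """Roll out a saturated problem many times and keep early prefixes of
    the rare incorrect trajectories (the informative failures)."""
    prefixes = []
    for _ in range(n_rollouts):
        trajectory = generate_fn(question)            # one full reasoning trace
        if not verify_fn(trajectory):                 # verifiable reward is 0
            cut = int(len(trajectory) * prefix_frac)  # keep a failure-prone prefix
            prefixes.append(trajectory[:cut])
    return prefixes


def failure_conditioned_batch(generate_fn, verify_fn, question, prefixes):
    """Start each episode from question + failure prefix instead of the bare
    question, reallocating exploration toward failure-prone states."""
    prompts = [question + prefix for prefix in prefixes]
    rollouts = [generate_fn(p) for p in prompts]
    rewards = [float(verify_fn(r)) for r in rollouts]
    return prompts, rollouts, rewards


def rlvr_step_on_saturated(trainer, generate_fn, verify_fn, questions):
    for question in questions:
        prefixes = collect_failure_prefixes(generate_fn, verify_fn, question)
        if not prefixes:                              # no failures found this round
            continue
        prompts, rollouts, rewards = failure_conditioned_batch(
            generate_fn, verify_fn, question, prefixes)
        trainer.update(prompts, rollouts, rewards)    # e.g. a GRPO/PPO-style update
```

Under this reading, the iterative variant described in the abstract would correspond to re-running collect_failure_prefixes periodically during training, so the failure prefixes stay matched to the current policy after performance plateaus.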
Similar Papers
Reuse your FLOPs: Scaling RL on Hard Problems by Conditioning on Very Off-Policy Prefixes
Machine Learning (CS)
Teaches computers to solve hard problems faster.
Generalization of RLVR Using Causal Reasoning as a Testbed
Machine Learning (CS)
Teaches AI to reason better with harder problems.
The Reasoning Boundary Paradox: How Reinforcement Learning Constrains Language Models
Artificial Intelligence
Fixes AI reasoning errors by focusing on hard problems.