Synthetic Error Injection Fails to Elicit Self-Correction in Language Models
By: David X. Wu, Shreyas Kapur, Anant Sahai, and more
Potential Business Impact:
Teaching computers to fix their own mistakes failed.
Reinforcement learning has become the dominant paradigm for eliciting reasoning and self-correction capabilities in large language models, but its computational expense motivates the search for alternatives. Inspired by techniques from autonomous driving and robotics, we investigate whether supervised learning with synthetic error injection can induce self-correction in language models. Our approach inserts artificial errors into reasoning chains, masks them, and supervises the model to recognize and correct these mistakes. Despite the intuitive appeal of this method, we find that it fails to significantly improve performance across multiple models, even on simple synthetic tasks. Moreover, even when the model catches its own error, it often parrots the original mistake. We find that the distribution shift from synthetic errors to on-policy errors significantly degrades the error-correction capabilities of the fine-tuned model, even when the synthetic errors cover the on-policy errors well. Our results help explain why on-policy reinforcement learning has proven uniquely effective at eliciting self-correction.
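The data-construction recipe the abstract describes (inject an error into a reasoning chain, mask it, supervise a correction) might look like the following minimal sketch. This is a toy interpretation, not the authors' pipeline: the digit-perturbation error model, the "Wait, step N is wrong" target format, and the convention of zeroing the loss weight on the injected-error tokens are all illustrative assumptions.

```python
import random

def inject_error(steps, rng):
    """Corrupt one reasoning step by perturbing its first digit
    (a toy stand-in for the paper's synthetic error injection)."""
    idx = rng.randrange(len(steps))
    corrupted = list(steps)
    chars = list(corrupted[idx])
    for j, ch in enumerate(chars):
        if ch.isdigit():
            chars[j] = str((int(ch) + 1) % 10)  # guaranteed to differ
            break
    corrupted[idx] = "".join(chars)
    return corrupted, idx

def build_example(steps, rng):
    """Build one supervised example: the prompt is the chain up to and
    including the injected error; the target recognizes the mistake and
    continues with the correct steps. The loss mask zeroes out the
    injected-error step so the model is never trained to *produce* the
    mistake, only to fix it (an assumed reading of "masks them")."""
    corrupted, idx = inject_error(steps, rng)
    prompt = corrupted[: idx + 1]
    target = [f"Wait, step {idx + 1} is wrong.", steps[idx]] + steps[idx + 1 :]
    loss_mask = [0 if i == idx else 1 for i in range(len(prompt))]
    return {"prompt": prompt, "target": target, "prompt_loss_mask": loss_mask}

# Example: a three-step arithmetic chain.
rng = random.Random(0)
steps = ["2 + 3 = 5", "5 * 4 = 20", "20 - 1 = 19"]
example = build_example(steps, rng)
```

A fine-tuning set would be built by applying `build_example` to many chains; the paper's finding is that models trained this way still fail to correct their own on-policy errors, because those differ in distribution from the synthetic ones.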
Similar Papers
Language Models can perform Single-Utterance Self-Correction of Perturbed Reasoning
Computation and Language
Computers can fix their own math mistakes.
Language Self-Play For Data-Free Training
Artificial Intelligence
Computers learn to be smarter by playing games.
Natural Emergent Misalignment from Reward Hacking in Production RL
Artificial Intelligence
Teaches AI to cheat, then fixes it.