Agentic Reinforcement Learning for Real-World Code Repair

Published: October 24, 2025 | arXiv ID: 2510.22075v1

By: Siyu Zhu, Anastasiya Karpovich, Albert Chen, and more

BigTech Affiliations: LinkedIn

Potential Business Impact:

Automatically fixes bugs in real-world code repositories, verifying each fix with a successful build.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We tackle the challenge of training reliable code-fixing agents in real repositories, where complex builds and shifting dependencies make evaluation unstable. We developed a verifiable pipeline in which success is defined as post-fix build validation, and improved reproducibility across ~1K real issues by pinning dependencies and disabling automatic upgrades. Building on this, we introduced a simplified pipeline that scales to large-scale reinforcement learning (RL). Using this setup, we applied supervised fine-tuning (SFT) to Qwen3-32B in the full pipeline and then applied RL on top of the SFT model in the simplified environment. The SFT model, distilled from GPT-4.1 trajectories, performs on par with its teacher while being 56x smaller, and RL added 7-20% absolute gains under matched train-test conditions. "Thinking mode" was on par or worse in our experiments. Neither the SFT nor the RL model generalized across environments, highlighting the importance of matching train and test environments when building reliable real-world code-fixing agents.
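To make the reward idea concrete, here is a minimal sketch of a binary build-validation reward, assuming the verifier simply shells out to the repository's build command after a candidate fix is applied. The function name, build command, and timeout are illustrative assumptions, not the authors' implementation; the paper specifies only that success is defined as post-fix build validation, with dependencies pinned and automatic upgrades disabled to keep the signal stable.

```python
import subprocess

def build_validation_reward(repo_dir: str, build_cmd: list[str],
                            timeout_s: int = 1800) -> float:
    """Binary reward: 1.0 if the post-fix build succeeds, else 0.0.

    Hypothetical sketch. Dependency pinning (e.g., a committed lockfile)
    and disabled auto-upgrades are assumed to be handled in the repo
    setup so that repeated evaluations are reproducible.
    """
    try:
        result = subprocess.run(
            build_cmd,          # e.g. ["./gradlew", "build", "--offline"]
            cwd=repo_dir,
            capture_output=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return 0.0              # treat hung builds as failures
    return 1.0 if result.returncode == 0 else 0.0
```

A binary, build-based reward like this is verifiable but expensive, which is consistent with the paper's move to a simplified environment for large-scale RL.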

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)