Reasoning Distillation for Lightweight Automated Program Repair
By: Aanand Balasubramanian, Sashank Silwal
Potential Business Impact:
Fixes computer bugs better with smarter thinking.
We study whether lightweight symbolic reasoning supervision can improve fix-type classification in compact automated program repair models. Small code models are attractive for resource-constrained settings, but they typically produce only a single prediction, making it unclear whether they learn meaningful program structure or rely on shallow correlations. We propose a reasoning distillation approach in which a large teacher model provides structured symbolic reasoning tags alongside fix-type labels. These tags capture high-level causal properties of bugs without relying on free-form explanations. We train a CodeT5-based student model under label-only and reasoning-distilled settings on the IntroClass benchmark. Reasoning supervision consistently improves macro-averaged performance, particularly on less frequent bug categories, without increasing model size or complexity. We further analyze the relationship between reasoning accuracy and fix-type prediction, showing that correct reasoning traces strongly correlate with correct predictions, while not fully determining them. Our results suggest that symbolic reasoning distillation is a practical way to improve interpretability and robustness in lightweight program repair models.
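To make the two training settings concrete, below is a minimal sketch of how a reasoning-distilled target could be serialized for a CodeT5 student: the teacher's symbolic tags are prepended to the fix-type label in the output sequence, while the label-only baseline trains on the label alone. The tag names, label names, example snippet, and training loop here are illustrative assumptions, not the paper's actual data format or code; only the CodeT5 checkpoint and the standard Hugging Face seq2seq API are taken as given.

```python
# Minimal sketch (not the authors' code): fine-tuning a CodeT5 student on
# fix-type classification, comparing a label-only target with a target that
# prepends hypothetical symbolic reasoning tags distilled from a teacher.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_NAME = "Salesforce/codet5-small"  # assumed student checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

# One illustrative buggy snippet in the spirit of an IntroClass task (hypothetical).
buggy_code = "if (a > b) { median = a; }  /* misses the a == b case */"

fix_type_label = "change_operator"  # hypothetical fix-type label
reasoning_tags = "<cause:boundary_condition> <scope:branch_guard>"  # hypothetical teacher tags


def make_example(code: str, target: str) -> dict:
    """Tokenize one (input, target) pair for seq2seq fine-tuning."""
    enc = tokenizer(code, truncation=True, max_length=256, return_tensors="pt")
    enc["labels"] = tokenizer(target, truncation=True, max_length=64,
                              return_tensors="pt").input_ids
    return enc


# Label-only setting: the student predicts just the fix-type label.
label_only = make_example(buggy_code, fix_type_label)

# Reasoning-distilled setting: the student predicts teacher tags, then the label.
distilled = make_example(buggy_code, f"{reasoning_tags} {fix_type_label}")

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
for batch in (label_only, distilled):  # stand-in for a real DataLoader loop
    loss = model(**batch).loss         # cross-entropy over the target tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss: {loss.item():.4f}")
```

Serializing the reasoning tags into the same output sequence is one way to add the extra supervision without changing the student's architecture or parameter count, which is consistent with the abstract's claim that the gains come without increasing model size or complexity.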
Similar Papers
Reasoning Distillation and Structural Alignment for Improved Code Generation
Artificial Intelligence
Teaches small computers to solve hard coding problems.
Towards Understanding Distilled Reasoning Models: A Representational Approach
Machine Learning (CS)
Teaches AI to think smarter and check its work.
Skill-Aware Data Selection and Fine-Tuning for Data-Efficient Reasoning Distillation
Computation and Language
Teaches computers to solve math problems faster.