Reasoning Distillation for Lightweight Automated Program Repair

Published: January 16, 2026 | arXiv ID: 2601.10987v1

By: Aanand Balasubramanian, Sashank Silwal

Potential Business Impact:

Fixes software bugs more accurately by teaching small AI models to reason about the cause of each bug.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We study whether lightweight symbolic reasoning supervision can improve fix-type classification in compact automated program repair models. Small code models are attractive for resource-constrained settings, but they typically produce only a single prediction, making it unclear whether they learn meaningful program structure or rely on shallow correlations. We propose a reasoning distillation approach in which a large teacher model provides structured symbolic reasoning tags alongside fix-type labels. These tags capture high-level causal properties of bugs without relying on free-form explanations. We train a CodeT5-based student model under label-only and reasoning-distilled settings on the IntroClass benchmark. Reasoning supervision consistently improves macro-averaged performance, particularly on less frequent bug categories, without increasing model size or complexity. We further analyze the relationship between reasoning accuracy and fix-type prediction, showing that correct reasoning traces strongly correlate with correct predictions while not fully determining them. Our results suggest that symbolic reasoning distillation is a practical way to improve interpretability and robustness in lightweight program repair models.
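To make the two supervision settings concrete, the sketch below shows one plausible way to build training targets for a CodeT5 student: in the label-only setting the target is just the fix-type label, while in the reasoning-distilled setting teacher-provided symbolic tags are serialized in front of the label. The tag names, label names, and serialization format here are illustrative assumptions, not the paper's actual scheme.

```python
# Minimal sketch of label-only vs. reasoning-distilled supervision for a
# CodeT5 student. Tag/label names and the "tags => label" format are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-small")

example = {
    "buggy_code": "if (n => 0) return n;   // comparison operator is wrong",
    "fix_type": "OPERATOR_REPLACEMENT",                            # hypothetical label
    "reasoning_tags": ["WRONG_COMPARATOR", "BOUNDARY_CONDITION"],  # hypothetical teacher tags
}

# Label-only setting: the student is supervised on the fix-type label alone.
label_only_target = example["fix_type"]

# Reasoning-distilled setting: symbolic reasoning tags from the teacher are
# prepended to the label, so the student must also produce the reasoning trace.
distilled_target = " ".join(example["reasoning_tags"]) + " => " + example["fix_type"]

def seq2seq_loss(source: str, target: str):
    """Teacher-forced cross-entropy loss for one (source, target) pair."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    return model(**inputs, labels=labels).loss

print(seq2seq_loss(example["buggy_code"], label_only_target))
print(seq2seq_loss(example["buggy_code"], distilled_target))
```

Under this framing, the distilled setting changes only the target sequence, so model size and architecture stay fixed, which matches the paper's claim that the gains come without added complexity.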

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)