The Valley of Code Reasoning: Scaling Knowledge Distillation of Large Language Models

Published: October 7, 2025 | arXiv ID: 2510.06101v1

By: Muyu He, Muhammad Ali Shafique, Anand Kumar, and more

Potential Business Impact:

Helps smaller AI models learn competitive coding skills more efficiently through distillation.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Distilling the thinking traces of a Large Language Model (LLM) with reasoning capabilities into a smaller model has been proven effective. Yet, little work has examined how model performance scales with the quantity of distillation data. In this work, we study the scaling trend of distilling competitive coding skills into two small non-reasoning LLMs. We validate the hypothesis that there is a $\textit{valley of code reasoning}$: downstream performance on competitive coding first drops as data quantity increases, then steadily increases in a sharper-than-log-linear fashion. Having identified this trend, we further fine-tune the models at two different distillation stages on the same data to ground conclusions about their respective learning phases. We find that, across stages in the low and medium-low data regimes, small models benefit significantly more from easier coding questions than from harder ones. We also find that, surprisingly, the correctness of outputs in the training data makes no difference to distillation outcomes. Our work represents a step forward in understanding the training dynamics of code reasoning distillation beyond intuition.
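
To make the setup the abstract describes concrete, below is a minimal sketch of distilling reasoning traces into a small model via supervised fine-tuning on the teacher's thinking text. This is not the authors' code: the model name, the trace file format, and all hyperparameters are illustrative assumptions, and a real run would add batching, evaluation on competitive-coding benchmarks, and a learning-rate schedule.

```python
# Illustrative sketch: supervised fine-tuning of a small non-reasoning model on
# teacher-generated thinking traces. Model name, file path, and hyperparameters
# are assumptions for illustration, not values from the paper.
import json

import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-1.5B-Instruct"   # hypothetical small student model
TRACES_PATH = "distilled_traces.jsonl"      # one {"question": ..., "trace": ...} per line

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.train()

def encode(example):
    # Concatenate the coding question with the teacher's thinking trace and solution,
    # so the student learns to imitate the full reasoning text.
    text = example["question"] + "\n" + example["trace"] + tokenizer.eos_token
    ids = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")
    return ids["input_ids"][0]

with open(TRACES_PATH) as f:
    dataset = [encode(json.loads(line)) for line in f]

loader = DataLoader(dataset, batch_size=1, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for input_ids in loader:
    # Standard causal-LM objective: the model shifts labels internally,
    # so passing input_ids as labels trains next-token prediction on the trace.
    loss = model(input_ids=input_ids, labels=input_ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The scaling question the paper studies would then amount to varying how many lines of `distilled_traces.jsonl` are used and measuring downstream competitive-coding performance at each data budget.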

Page Count
6 pages

Category
Computer Science:
Computation and Language