Style over Substance: Distilled Language Models Reason Via Stylistic Replication
By: Philip Lippmann, Jie Yang
Potential Business Impact:
Teaches computers to think by copying how we write out our thoughts.
Specialized reasoning language models (RLMs) have demonstrated that scaling test-time computation through detailed reasoning traces significantly enhances performance. Although these traces effectively facilitate knowledge distillation into smaller, instruction-tuned models, the precise nature of the transferred reasoning remains unclear. In this study, we investigate to what extent distilled models internalize replicated stylistic patterns during reasoning. To this end, we systematically analyze reasoning traces, identifying structural and lexical patterns that characterize successful reasoning. We then introduce two new datasets, one of emergent reasoning traces and one constructed synthetically to replicate these stylistic patterns, to precisely examine their influence on distilled models' reasoning capabilities. We find that models trained on the synthetic traces achieve performance comparable to that of models trained on the emergent traces, indicating that distilled reasoning abilities rely significantly on surface-level patterns. Surprisingly, we observe an increase in performance even when the synthetic traces are altered to lead to the wrong answer. Our findings highlight how stylistic patterns can be leveraged to efficiently enhance LM reasoning across diverse model families.
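To make the idea of style replication more concrete, below is a minimal Python sketch of how synthetic traces of this kind could be assembled: templated step openers, a self-verification cue, and a signposted conclusion wrapped around arbitrary content, then packaged as supervised fine-tuning pairs for a smaller model. The function names, marker phrases, and trace format here are illustrative assumptions, not the authors' actual pipeline.

import random

# Illustrative stylistic markers of the kind the paper associates with
# successful reasoning traces: step headers, self-verification, and a
# signposted conclusion. The exact phrases are assumptions for this sketch.
STEP_OPENERS = ["First,", "Next,", "Then,", "So,"]
VERIFICATION_CUES = ["Let me double-check that.", "Wait, does that hold?"]
CONCLUSION_CUES = ["Putting it together,", "Therefore,"]

def synthesize_trace(question: str, facts: list[str], answer: str, seed: int = 0) -> str:
    """Assemble a synthetic trace that mimics the surface style of emergent
    RLM reasoning. The content comes from the caller; only the style is
    templated, mirroring the paper's focus on stylistic replication."""
    rng = random.Random(seed)
    lines = [f"Question: {question}", "<think>"]
    for fact in facts:
        lines.append(f"{rng.choice(STEP_OPENERS)} {fact}")
    lines.append(rng.choice(VERIFICATION_CUES))
    lines.append(f"{rng.choice(CONCLUSION_CUES)} the answer is {answer}.")
    lines.append("</think>")
    lines.append(f"Answer: {answer}")
    return "\n".join(lines)

def to_sft_example(question: str, facts: list[str], answer: str) -> dict:
    """Package one (prompt, completion) pair for supervised fine-tuning of a
    smaller instruction-tuned model on the synthetic traces."""
    return {"prompt": question, "completion": synthesize_trace(question, facts, answer)}

if __name__ == "__main__":
    example = to_sft_example(
        question="A train travels 60 km in 1.5 hours. What is its average speed?",
        facts=["average speed is distance divided by time",
               "60 km divided by 1.5 hours gives 40 km/h"],
        answer="40 km/h",
    )
    print(example["completion"])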
Similar Papers
Towards Understanding Distilled Reasoning Models: A Representational Approach
Machine Learning (CS)
Teaches AI to think smarter and check its work.
SDRT: Enhance Vision-Language Models by Self-Distillation with Diverse Reasoning Traces
CV and Pattern Recognition
Teaches computers to "think" better with pictures.
Hán Dān Xué Bù (Mimicry) or Qīng Chū Yú Lán (Mastery)? A Cognitive Perspective on Reasoning Distillation in Large Language Models
Computation and Language
Makes AI think smarter, not just copy words.