Style over Substance: Distilled Language Models Reason Via Stylistic Replication

Published: April 2, 2025 | arXiv ID: 2504.01738v3

By: Philip Lippmann, Jie Yang

Potential Business Impact:

Teaches smaller language models to reason by copying the style in which reasoning is written out.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Specialized reasoning language models (RLMs) have demonstrated that scaling test-time computation through detailed reasoning traces significantly enhances performance. Although these traces effectively facilitate knowledge distillation into smaller, instruction-tuned models, the precise nature of the transferred reasoning remains unclear. In this study, we investigate to what extent distilled models internalize replicated stylistic patterns during reasoning. To this end, we systematically analyze reasoning traces, identifying structural and lexical patterns that characterize successful reasoning. We then introduce two new datasets -- a dataset of emergent reasoning traces and a synthetic dataset explicitly constructed to replicate these stylistic patterns -- to precisely examine their influence on distilled models' reasoning capabilities. We find that models trained on the synthetic traces achieve performance comparable to that of models trained on the emergent traces, indicating that distilled reasoning abilities rely significantly on surface-level patterns. Surprisingly, we observe an increase in performance even when the synthetic traces are altered to lead to the wrong answer. Our findings highlight how stylistic patterns can be leveraged to efficiently enhance LM reasoning across diverse model families.
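
The abstract's core idea -- constructing synthetic traces that copy the surface style of successful reasoning -- can be pictured with a small sketch. The Python snippet below is an illustrative assumption rather than the authors' actual pipeline: it assembles a trace from numbered steps and occasionally inserts hypothetical stylistic markers (self-correction phrases such as "Wait," or "Alternatively,") of the kind such trace analyses typically surface.

import random

# Hedged sketch: a hypothetical template-based generator for synthetic
# reasoning traces in the style the paper describes (structured steps plus
# lexical self-correction markers). The marker list and template are
# illustrative assumptions, not the authors' construction method.

STYLE_MARKERS = ["Wait,", "Hmm,", "Let me double-check that.", "Alternatively,"]

def synthetic_trace(question: str, steps: list[str], answer: str) -> str:
    """Assemble a trace that mimics surface-level reasoning style:
    numbered steps, occasional stylistic pivots, and a final answer."""
    lines = [f"Question: {question}", "<think>"]
    for i, step in enumerate(steps, start=1):
        if random.random() < 0.3:  # sprinkle in a stylistic pivot phrase
            lines.append(random.choice(STYLE_MARKERS))
        lines.append(f"Step {i}: {step}")
    lines.append("</think>")
    lines.append(f"Answer: {answer}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(synthetic_trace(
        "What is 17 * 6?",
        ["17 * 6 = 17 * 5 + 17", "17 * 5 = 85", "85 + 17 = 102"],
        "102",
    ))

Note that in a generator like this, the abstract's "wrong answer" variant would only change the answer argument (and possibly the step contents), leaving the stylistic scaffolding intact -- which is the point of the comparison.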

Page Count
18 pages

Category
Computer Science: Computation and Language