Training-Trajectory-Aware Token Selection
By: Zhanming Shen, Jiaqi Hu, Zeyu Qin, and more
Efficient distillation is a key pathway for converting expensive reasoning capability into deployable efficiency, yet in the frontier regime where the student already has strong reasoning ability, naive continual distillation often yields limited gains or even degradation. We observe a characteristic training phenomenon: even as the loss decreases monotonically, all performance metrics can drop sharply at almost the same bottleneck before gradually recovering. We further uncover a token-level mechanism: token confidence bifurcates into Imitation-Anchor Tokens, whose confidence rises steadily and quickly anchors optimization, and yet-to-learn tokens, whose confidence is suppressed until after the bottleneck. The inability of these two types of tokens to be learned simultaneously is the root cause of the failure of continual distillation. To this end, we propose Training-Trajectory-Aware Token Selection (T3S), which reconstructs the training objective at the token level, clearing the optimization path for yet-to-learn tokens. T3S yields consistent gains in both autoregressive (AR) and diffusion LLM (dLLM) settings: with only hundreds of examples, Qwen3-8B surpasses DeepSeek-R1 on competitive reasoning benchmarks, Qwen3-32B approaches Qwen3-235B, and a T3S-trained LLaDA-2.0-Mini exceeds its AR baseline, achieving state-of-the-art performance among 16B-scale no-think models.
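To make the mechanism concrete, below is a minimal sketch of what a trajectory-aware, token-selective distillation loss could look like. The abstract does not specify the actual selection rule, so everything here is an assumption: the function name `t3s_style_loss`, the exponential-moving-average confidence tracker, and the threshold `anchor_tau` are illustrative stand-ins for whatever trajectory statistic the paper uses to separate Imitation-Anchor Tokens from yet-to-learn tokens.

```python
import torch
import torch.nn.functional as F

def t3s_style_loss(logits, targets, conf_ema, ema_decay=0.9, anchor_tau=0.9):
    """Hypothetical token-selective distillation loss (not the paper's exact method).

    logits:   (B, T, V) student logits over the teacher trace
    targets:  (B, T)    teacher token ids
    conf_ema: (B, T)    running per-token confidence from earlier training steps
                        (an illustrative trajectory statistic)
    """
    # Probability the student currently assigns to each teacher token.
    log_probs = F.log_softmax(logits, dim=-1)
    tok_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    tok_conf = tok_logp.exp()

    # Track each token's confidence trajectory with an EMA (assumed criterion).
    conf_ema = ema_decay * conf_ema + (1 - ema_decay) * tok_conf.detach()

    # Tokens whose confidence has already anchored high are treated as
    # Imitation-Anchor Tokens and dropped from the objective, clearing the
    # optimization path for the remaining yet-to-learn tokens.
    yet_to_learn = (conf_ema < anchor_tau).float()

    nll = -tok_logp
    loss = (nll * yet_to_learn).sum() / yet_to_learn.sum().clamp(min=1.0)
    return loss, conf_ema
```

In this reading, the student still fits the full teacher trace early on, but once a token's trajectory shows it is already anchored, its gradient is removed so the suppressed tokens are no longer competing with it, which is one plausible way to "reconstruct the training objective at the token level" as the abstract describes.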