ToDi: Token-wise Distillation via Fine-Grained Divergence Control

Published: May 22, 2025 | arXiv ID: 2505.16297v1

By: Seongryong Jung, Suwan Yoon, DongGeon Kim, and more

Potential Business Impact:

Enables distilling large language models into smaller ones that can run efficiently on resource-constrained devices.

Business Areas:
Text Analytics, Data and Analytics, Software

Large language models (LLMs) offer impressive performance but are impractical for resource-constrained deployment due to high latency and energy consumption. Knowledge distillation (KD) addresses this by transferring knowledge from a large teacher to a smaller student model. However, conventional KD objectives, notably Forward KL (FKL) and Reverse KL (RKL), apply a uniform divergence loss across the entire vocabulary, neglecting token-level prediction discrepancies. By investigating these representative divergences via gradient analysis, we reveal that FKL boosts underestimated tokens, while RKL suppresses overestimated ones, showing their complementary roles. Based on this observation, we propose Token-wise Distillation (ToDi), a novel method that adaptively combines FKL and RKL per token using a sigmoid-based weighting function derived from the teacher-student probability log-ratio. ToDi dynamically emphasizes the appropriate divergence for each token, enabling precise distribution alignment. We demonstrate that ToDi consistently outperforms recent distillation baselines that use uniform or less granular strategies across instruction-following benchmarks. Extensive ablation studies and efficiency analysis further validate ToDi's effectiveness and practicality.
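For intuition, here is a minimal PyTorch sketch of the token-wise FKL/RKL blend the abstract describes. The function name `todi_loss`, the `temperature` argument, and the exact placement of the sigmoid gate are illustrative assumptions, not the paper's verbatim formulation; the idea is only that a sigmoid of the teacher-student log-probability ratio shifts weight toward FKL where the student underestimates and toward RKL where it overestimates.

```python
import torch
import torch.nn.functional as F

def todi_loss(teacher_logits: torch.Tensor,
              student_logits: torch.Tensor,
              temperature: float = 1.0) -> torch.Tensor:
    """Sketch of a ToDi-style loss (assumed form, not the official code).

    teacher_logits, student_logits: (batch, seq_len, vocab_size)
    """
    # Teacher is a fixed reference; no gradients flow through it.
    teacher_logits = teacher_logits.detach()

    log_p = F.log_softmax(teacher_logits / temperature, dim=-1)  # teacher log-probs
    log_q = F.log_softmax(student_logits / temperature, dim=-1)  # student log-probs
    p = log_p.exp()
    q = log_q.exp()

    # Sigmoid gate on the log-ratio log(p/q):
    # w -> 1 where the student underestimates (favor FKL, which boosts the token),
    # w -> 0 where it overestimates (favor RKL, which suppresses it).
    # Detaching the gate (an assumption here) treats it as a constant weight.
    w = torch.sigmoid((log_p - log_q).detach())

    fkl = p * (log_p - log_q)  # forward KL integrand, per vocabulary entry
    rkl = q * (log_q - log_p)  # reverse KL integrand, per vocabulary entry

    # Blend per entry, sum over the vocabulary for a per-token loss,
    # then average over batch and sequence.
    loss = (w * fkl + (1.0 - w) * rkl).sum(dim=-1)
    return loss.mean()
```

In this rendering, setting `w` identically to 1 recovers plain forward KL and setting it to 0 recovers reverse KL, so the gate interpolates between the two regimes per token rather than committing to one divergence for the whole vocabulary.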

Country of Origin
🇰🇷 Korea, Republic of

Page Count
13 pages

Category
Computer Science:
Computation and Language