Logits Replay + MoClip: Stabilized, Low-Cost Post-Training with Minimal Forgetting

Published: October 10, 2025 | arXiv ID: 2510.09152v1

By: Suming Qiu, Jing Li, Zhicheng Zhou, and others

BigTech Affiliations: Huawei

Potential Business Impact:

Teaches AI models new specialist skills without making them forget their general abilities, at over 40% lower training cost.

Business Areas:
A/B Testing Data and Analytics

Large language models (LLMs) often face a trade-off in post-training: improvements on specialized domains frequently come at the expense of general capabilities. Existing solutions attempt to mitigate this tension via regularization, selective parameter updates, or data-centric replay, but each imposes significant costs in computation, data access, or adaptability. Recent work has shown that training signals can be compressed to subsets of logits without severe accuracy loss, suggesting a path toward efficient adaptation. However, naive truncation destabilizes optimization and exacerbates forgetting. We introduce Logits Replay + MoClip, a two-stage framework that compresses supervision in the logit space and stabilizes optimization at the update level. In Stage 0, we record dynamic Top-K token subsets that cover a probability threshold, always including the gold label. In Stage 1, we replay these compact subsets to compute exact renormalized losses, avoiding full softmax computation and implicitly regularizing. To ensure stability, we design MoClip, an optimizer that caps gradient-momentum rotation and applies an arctan2-based rescaling of updates. Empirically, our method improves domain performance on Communication Technology (CT) and NL2SQL tasks while mitigating forgetting on general benchmarks (MMLU, BBH, GPQA, MATH), and reduces training cost by over 40%. Together, these contributions offer a scalable, architecture-agnostic path for domain adaptation of LLMs without sacrificing generalization.
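The two stages described above can be sketched in code. The following is a minimal, illustrative PyTorch sketch based only on the abstract: `record_topk_subset` (Stage 0) caches the smallest top-K token set whose probability mass covers a threshold, always including the gold label; `replay_loss` (Stage 1) computes an exact cross-entropy renormalized over that cached subset, so no full-vocabulary softmax is needed. The function names, the threshold default, and the `atan2_rescale` helper are assumptions for illustration; the paper's exact formulations may differ.

```python
import torch
import torch.nn.functional as F

def record_topk_subset(logits, gold, p_threshold=0.95):
    """Stage 0 (sketch): keep the smallest top-K set whose cumulative
    probability covers p_threshold, always including the gold token."""
    probs = F.softmax(logits, dim=-1)
    sorted_p, sorted_idx = probs.sort(descending=True)
    cum = sorted_p.cumsum(-1)
    k = int((cum < p_threshold).sum().item()) + 1  # smallest K covering the threshold
    keep = sorted_idx[:k]
    if gold not in keep:  # always include the gold label
        keep = torch.cat([keep, torch.tensor([gold])])
    return keep

def replay_loss(new_logits, keep_idx, gold):
    """Stage 1 (sketch): exact renormalized cross-entropy over the cached
    subset -- softmax over the subset only, never the full vocabulary."""
    sub = new_logits[keep_idx]
    log_p = sub - torch.logsumexp(sub, dim=-1)  # renormalize within the subset
    gold_pos = (keep_idx == gold).nonzero().item()
    return -log_p[gold_pos]

def atan2_rescale(m, v, lr):
    """Illustrative arctan2-style update rescaling in the spirit of MoClip's
    stabilization: atan2 bounds the update magnitude, replacing the usual
    m / (sqrt(v) + eps) division without an epsilon hyperparameter.
    (Assumed form; the paper's rule, including momentum-rotation capping,
    is not specified in the abstract.)"""
    return -lr * torch.atan2(m, v.sqrt())
```

In this sketch, Stage 0 runs once under the reference model to cache `keep_idx` per position; Stage 1 then only needs the new model's logits at those cached indices, which is where the compute savings come from.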

Country of Origin
🇨🇳 China

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)