Score: 2

Distilling to Hybrid Attention Models via KL-Guided Layer Selection

Published: December 23, 2025 | arXiv ID: 2512.20569v1

By: Yanhong Li, Songlin Yang, Shawn Tan, and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Makes large language models cheaper and faster to run at inference time without retraining them from scratch.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Distilling pretrained softmax attention Transformers into more efficient hybrid architectures that interleave softmax and linear attention layers is a promising approach for improving the inference efficiency of LLMs without requiring expensive pretraining from scratch. A critical factor in the conversion process is layer selection, i.e., deciding which layers to convert to linear attention variants. This paper describes a simple and efficient recipe for layer selection that uses layer importance scores derived from a small amount of training on generic text data. Once the layers have been selected, we use a recent pipeline for the distillation process itself (RADLADS; Goldstein et al., 2025), which consists of attention weight transfer, hidden state alignment, and KL-based distribution matching, followed by a small amount of finetuning. We find that this approach is more effective than existing approaches to layer selection, including heuristics that uniformly interleave linear attention layers at a fixed ratio, as well as more involved approaches that rely on specialized diagnostic datasets.
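
The listing does not include code, so the following is only a hypothetical PyTorch sketch of one way a KL-guided layer score could be computed: swap each candidate layer's softmax attention for a toy linear attention block, briefly tune just that block on a small batch of generic text, and rank layers by the residual KL divergence to the teacher's output distribution. The toy model, the leave-one-out scoring rule, and all names here are illustrative assumptions, not the authors' actual pipeline (the paper uses RADLADS for the distillation stage itself).

```python
# Hypothetical sketch of KL-guided layer selection (illustrative, not the paper's code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, LAYERS, SEQ = 100, 64, 4, 32

class LinearAttention(nn.Module):
    """Toy (non-causal) kernel-based linear attention used as the replacement block."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, x):
        q = F.elu(self.q(x)) + 1
        k = F.elu(self.k(x)) + 1
        v = self.v(x)
        kv = torch.einsum("btd,bte->bde", k, v)               # global key-value summary
        z = 1.0 / (torch.einsum("btd,bd->bt", q, k.sum(1)) + 1e-6)
        return torch.einsum("btd,bde,bt->bte", q, kv, z)

class Block(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        if isinstance(self.attn, nn.MultiheadAttention):
            a, _ = self.attn(x, x, x)
        else:
            a = self.attn(x)
        x = x + a
        return x + self.mlp(x)

class TinyLM(nn.Module):
    """Stand-in for the pretrained softmax-attention teacher."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.blocks = nn.ModuleList(Block(DIM) for _ in range(LAYERS))
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        x = self.emb(ids)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x)

def layer_kl_score(teacher, layer_idx, tokens, steps=50):
    """Swap one layer's attention for linear attention, briefly tune only that
    block on the token batch, and return the remaining KL(teacher || hybrid)."""
    student = copy.deepcopy(teacher)
    student.blocks[layer_idx].attn = LinearAttention(DIM)
    opt = torch.optim.Adam(student.blocks[layer_idx].attn.parameters(), lr=1e-3)
    for _ in range(steps):
        with torch.no_grad():
            t_logp = F.log_softmax(teacher(tokens), dim=-1)
        s_logp = F.log_softmax(student(tokens), dim=-1)
        loss = F.kl_div(s_logp, t_logp, log_target=True, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

teacher = TinyLM().eval()
tokens = torch.randint(0, VOCAB, (8, SEQ))   # stand-in for a small batch of generic text
scores = {i: layer_kl_score(teacher, i, tokens) for i in range(LAYERS)}
ranking = sorted(scores, key=scores.get)     # lowest KL = cheapest layers to linearize
print("convert first:", ranking)
```

Under this assumed scoring rule, layers at the front of `ranking` are the ones whose softmax attention can be replaced with the least distortion of the teacher's predictive distribution, so they would be the first candidates to convert before running the full distillation and finetuning pipeline.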

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Computation and Language