Data Efficient Any Transformer-to-Mamba Distillation via Attention Bridge
By: Penghao Wang, Yuhao Zhou, Mengxuan Wu, and more
Potential Business Impact:
Lets efficient new models learn from established Transformer models.
State-space models (SSMs) have emerged as efficient alternatives to Transformers for sequence modeling, offering superior scalability through their recurrent structure. However, training them remains costly, and their ecosystem is far less mature than the Transformer's. Moreover, the structural heterogeneity between SSMs and Transformers makes it challenging to distill knowledge efficiently from pretrained attention models. In this work, we propose Cross-architecture distillation via Attention Bridge (CAB), a novel data-efficient distillation framework that transfers attention knowledge from Transformer teachers to state-space student models. Unlike conventional knowledge distillation, which transfers knowledge only at the output level, CAB enables token-level supervision via a lightweight bridge, improving both efficiency and transferability. We further introduce flexible layer-wise alignment strategies to accommodate architectural discrepancies between teacher and student. Extensive experiments across vision and language domains demonstrate that our method consistently improves the performance of state-space models, even under limited training data, outperforming both standard and cross-architecture distillation methods. Our findings suggest that attention-based knowledge can be transferred efficiently to recurrent models, enabling rapid reuse of Transformer expertise to build a stronger SSM community.
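The abstract describes token-level supervision through a lightweight bridge that lets an attention-free student be matched against a Transformer teacher's attention maps. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual method: it assumes a linear bridge matrix, a softmax similarity map over the student's hidden states, and an MSE matching loss, none of which are specified in the abstract. All names (`attention_bridge_loss`, `bridge_W`) are invented for illustration.

```python
import numpy as np

def attention_bridge_loss(teacher_attn, student_hidden, bridge_W):
    """Hypothetical sketch of bridge-based token-level supervision:
    project the SSM student's hidden states through a lightweight linear
    bridge, form a token-token similarity map, and match it against the
    Transformer teacher's attention map with an MSE loss."""
    # Project student states into the bridge space: (T, d) @ (d, d_b) -> (T, d_b).
    proj = student_hidden @ bridge_W
    # Scaled token-token similarities, softmax-normalized per query token,
    # so the student map is comparable to a row-stochastic attention map.
    logits = proj @ proj.T / np.sqrt(proj.shape[-1])
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    student_attn = np.exp(logits)
    student_attn /= student_attn.sum(axis=-1, keepdims=True)
    # Token-level supervision: elementwise match of the two attention maps.
    return float(np.mean((teacher_attn - student_attn) ** 2))

# Toy usage: 4 tokens, student width 8, bridge width 8.
rng = np.random.default_rng(0)
T, d, d_b = 4, 8, 8
teacher = rng.random((T, T))
teacher /= teacher.sum(axis=-1, keepdims=True)  # row-stochastic, like attention
loss = attention_bridge_loss(
    teacher,
    rng.standard_normal((T, d)),
    rng.standard_normal((d, d_b)) * 0.1,
)
```

In an actual training loop this loss would be added per aligned teacher-student layer pair (the paper's layer-wise alignment strategies decide which pairs), with the bridge parameters trained jointly with the student.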
Similar Papers
Understanding and Enhancing Mamba-Transformer Hybrids for Memory Recall and Language Modeling
Computation and Language
Studies how hybrid Mamba-Transformer models recall memory and model language.
CroSTAta: Cross-State Transition Attention Transformer for Robotic Manipulation
Robotics
Uses cross-state transition attention to help robots manipulate objects.
Empirical Evaluation of Knowledge Distillation from Transformers to Subquadratic Language Models
Computation and Language
Tests how well Transformer knowledge distills into faster subquadratic language models.