Sliding Window Attention Adaptation
By: Yijiong Yu, Jiale Liu, Qingyun Wu, and more
Potential Business Impact:
Lets computers understand long texts faster.
The self-attention mechanism in Transformer-based Large Language Models (LLMs) scales quadratically with input length, making long-context inference expensive. Sliding window attention (SWA) reduces this cost to linear complexity, but naively enabling complete SWA at inference time for models pretrained with full attention (FA) causes severe long-context performance degradation due to the training-inference mismatch. This raises the question: can FA-pretrained LLMs be adapted well to SWA without pretraining? We investigate this by proposing Sliding Window Attention Adaptation (SWAA), a set of practical recipes that combine five methods for better adaptation: (1) applying SWA only during prefilling; (2) preserving "sink" tokens; (3) interleaving FA/SWA layers; (4) chain-of-thought (CoT); and (5) fine-tuning. Our experiments show that SWA adaptation is feasible but non-trivial: no single method suffices, yet specific synergistic combinations effectively recover the original long-context performance. We further analyze the performance-efficiency trade-offs of different SWAA configurations and provide recommended recipes for diverse scenarios. Our code is available at https://github.com/yuyijiong/sliding-window-attention-adaptation
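To make recipes (1) and (2) concrete, the sketch below builds a causal attention mask that restricts each query to a local window while always preserving a few leading "sink" tokens. This is only a minimal illustration, not the SWAA implementation from the repository; the single-head setup, the `window` size, and the `num_sinks` count are arbitrary example assumptions.

```python
# Minimal sketch (not the authors' code) of a causal sliding-window mask
# that also keeps the first few "sink" tokens visible to every query.
import torch

def swa_mask_with_sinks(seq_len: int, window: int, num_sinks: int) -> torch.Tensor:
    """Boolean mask where entry (i, j) is True iff query i may attend to key j."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (L, 1)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions, shape (1, L)
    causal = j <= i                          # never attend to future tokens
    in_window = (i - j) < window             # keep only the most recent `window` keys
    is_sink = j < num_sinks                  # always keep the first `num_sinks` tokens
    return causal & (in_window | is_sink)

def masked_attention(q, k, v, mask):
    """Standard scaled dot-product attention with disallowed positions set to -inf."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    torch.manual_seed(0)
    seq_len, dim = 16, 8
    q = k = v = torch.randn(seq_len, dim)
    mask = swa_mask_with_sinks(seq_len, window=4, num_sinks=2)
    out = masked_attention(q, k, v, mask)
    print(mask.int())    # each row sees the sink tokens plus a local causal window
    print(out.shape)     # torch.Size([16, 8])
```

Under recipe (1), a mask like this would be applied only while prefilling the prompt, with decoding still attending over the full cache; under recipe (3), it would be used only in a subset of layers while the remaining layers keep full attention.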
Similar Papers
GatedFWA: Linear Flash Windowed Attention with Gated Associative Memory
Machine Learning (CS)
Makes AI models learn faster and remember more.
Paying Attention to Hybrid Attention: Untangling the Issues with Conversion Methods
Machine Learning (CS)
Makes AI models faster and cheaper to train.
Training-free Context-adaptive Attention for Efficient Long Context Modeling
Computation and Language
Makes AI understand long texts faster.