SoLA-Vision: Fine-grained Layer-wise Linear Softmax Hybrid Attention
By: Ruibang Li, Guan Luo, Yiwei Zhang, and more
Potential Business Impact:
Makes computer vision models faster and more accurate.
Standard softmax self-attention excels in vision tasks but incurs quadratic complexity O(N^2) in the number of tokens N, limiting high-resolution deployment. Linear attention reduces the cost to O(N), yet its compressed state representations can impair modeling capacity and accuracy. We present an analytical study that contrasts linear and softmax attention for visual representation learning from a layer-stacking perspective. We further conduct systematic experiments on layer-wise hybridization patterns of linear and softmax attention. Our results show that, compared with rigid intra-block hybrid designs, fine-grained layer-wise hybridization can match or surpass performance while requiring fewer softmax layers. Building on these findings, we propose SoLA-Vision (Softmax-Linear Attention Vision), a flexible layer-wise hybrid attention backbone that enables fine-grained control over how linear and softmax attention are integrated. By strategically inserting a small number of global softmax layers, SoLA-Vision achieves a strong trade-off between accuracy and computational cost. On ImageNet-1K, SoLA-Vision outperforms purely linear and other hybrid attention models. On dense prediction tasks, it consistently surpasses strong baselines by a considerable margin. Code will be released.
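The sketch below is a minimal NumPy illustration of the two attention forms the abstract contrasts and of a layer-wise hybrid schedule, written under stated assumptions; it is not the released SoLA-Vision code. The elu+1 feature map, single-head identity Q/K/V projections, and names such as `hybrid_schedule` and `run_stack` are choices made for the example, and the positions of the softmax layers are arbitrary.

```python
# Illustrative sketch (not the authors' code): contrasts O(N^2) softmax attention
# with O(N) kernelized linear attention, then interleaves them layer-wise.
import numpy as np

def softmax_attention(Q, K, V):
    """Standard softmax attention: materializes an N x N score matrix (quadratic in N)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                         # (N, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                    # (N, d_v)

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized linear attention: phi(Q) (phi(K)^T V), linear in N.
    Uses elu(x)+1 as the feature map (one common choice, assumed here)."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                                         # (d, d_v) compressed state
    z = Qf @ Kf.sum(axis=0, keepdims=True).T + eps        # (N, 1) normalizer
    return (Qf @ kv) / z

# Layer-wise hybrid schedule: mostly linear layers, with a few global softmax
# layers inserted at chosen depths (positions are arbitrary for illustration).
depth = 12
softmax_layers = {3, 7, 11}
hybrid_schedule = ["softmax" if i in softmax_layers else "linear" for i in range(depth)]

def run_stack(x, schedule):
    """Apply one single-head attention layer per schedule entry.
    Q/K/V projections are omitted (identity) to keep the sketch minimal."""
    for kind in schedule:
        attn = softmax_attention if kind == "softmax" else linear_attention
        x = x + attn(x, x, x)                             # residual connection
    return x

tokens = np.random.randn(196, 64)                         # e.g. 14x14 patches, dim 64
out = run_stack(tokens, hybrid_schedule)
print(hybrid_schedule, out.shape)
```

The cost difference comes from the order of multiplication: linear attention first folds keys and values into a small d x d_v state, so no N x N matrix is ever formed, while the occasional softmax layers in the schedule restore full pairwise token interactions at O(N^2) cost only where they are inserted.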
Similar Papers
InfiniteVL: Synergizing Linear and Sparse Attention for Highly-Efficient, Unlimited-Input Vision-Language Models
CV and Pattern Recognition
Lets AI remember long videos and stories.
Lightweight Backbone Networks Only Require Adaptive Lightweight Self-Attention Mechanisms
CV and Pattern Recognition
Makes AI see and understand images faster.
MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head
CV and Pattern Recognition
Makes AI smarter and faster for images and words.