SoLA-Vision: Fine-grained Layer-wise Linear Softmax Hybrid Attention

Published: January 16, 2026 | arXiv ID: 2601.11164v1

By: Ruibang Li, Guan Luo, Yiwei Zhang, and more

Potential Business Impact:

Reduces the computational cost of attention in vision models, enabling faster high-resolution image recognition and dense prediction without sacrificing accuracy.

Business Areas:
Image Recognition, Data and Analytics, Software

Standard softmax self-attention excels in vision tasks but incurs quadratic complexity O(N^2), limiting high-resolution deployment. Linear attention reduces the cost to O(N), yet its compressed state representations can impair modeling capacity and accuracy. We present an analytical study that contrasts linear and softmax attention for visual representation learning from a layer-stacking perspective. We further conduct systematic experiments on layer-wise hybridization patterns of linear and softmax attention. Our results show that, compared with rigid intra-block hybrid designs, fine-grained layer-wise hybridization can match or surpass performance while requiring fewer softmax layers. Building on these findings, we propose SoLA-Vision (Softmax-Linear Attention Vision), a flexible layer-wise hybrid attention backbone that enables fine-grained control over how linear and softmax attention are integrated. By strategically inserting a small number of global softmax layers, SoLA-Vision achieves a strong trade-off between accuracy and computational cost. On ImageNet-1K, SoLA-Vision outperforms purely linear and other hybrid attention models. On dense prediction tasks, it consistently surpasses strong baselines by a considerable margin. Code will be released.
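The core architectural idea is a layer-wise mix: most layers use O(N) linear attention, with a small number of global softmax layers inserted at chosen depths. The sketch below illustrates this pattern under assumed details; the class names, layer indices, dimensions, and the ELU+1 feature map are illustrative choices, not the authors' released SoLA-Vision implementation.

```python
# A minimal sketch (not the authors' released code) of the layer-wise hybrid idea:
# most layers use O(N) linear attention, while a few chosen layers use global
# softmax attention. Layer indices, dimensions, and module names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Kernel-based linear attention: cost is O(N * d^2) instead of O(N^2 * d)."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, N, d)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1      # positive feature map (one common choice)
        kv = torch.einsum("bnd,bne->bde", k, v)            # (B, d, d) compressed state
        z = 1.0 / (q @ k.sum(dim=1, keepdim=True).transpose(1, 2) + 1e-6)
        out = torch.einsum("bnd,bde->bne", q, kv) * z      # normalized linear attention
        return self.proj(out)


class SoftmaxAttention(nn.Module):
    """Standard global softmax self-attention with O(N^2) cost."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        return self.attn(x, x, x, need_weights=False)[0]


class HybridBackbone(nn.Module):
    """Stack of attention layers; `softmax_layers` picks which layers use global softmax."""

    def __init__(self, dim=384, depth=12, softmax_layers=(5, 11)):
        super().__init__()
        self.layers = nn.ModuleList(
            SoftmaxAttention(dim) if i in softmax_layers else LinearAttention(dim)
            for i in range(depth)
        )
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(depth))

    def forward(self, x):                      # x: (B, N, d) patch tokens
        for norm, layer in zip(self.norms, self.layers):
            x = x + layer(norm(x))             # pre-norm residual attention block
        return x


tokens = torch.randn(2, 196, 384)              # e.g. 14x14 patches, embedding dim 384
print(HybridBackbone()(tokens).shape)          # torch.Size([2, 196, 384])
```

With only two of twelve layers using softmax attention in this sketch, the quadratic cost is confined to a small fraction of the network, which mirrors the paper's finding that fine-grained layer-wise hybridization needs only a few softmax layers to match or surpass coarser intra-block hybrid designs.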

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition