Lightweight Backbone Networks Only Require Adaptive Lightweight Self-Attention Mechanisms
By: Fengyun Li, Chao Zheng, Yangyang Fang, and more
Potential Business Impact:
Makes AI see and understand images faster.
Lightweight hybrid backbone networks have partially alleviated computational saturation, but the imbalance in computational efficiency between convolutional neural networks (CNNs) and attention mechanisms is becoming increasingly apparent. Although linear attention mechanisms and their variants have made progress toward lightweight designs, they still fall short of the long-sequence modeling that hybrid models demand. Existing lightweight SoftMax attention methods, on the other hand, typically shrink the feature map to a fixed size to reduce the number of sequences and thereby compress the computational scale; however, choosing the feature map reduction ratio is cumbersome, and computational saturation persists. To address this, the paper proposes a lightweight SoftMax attention mechanism with adaptive feature map sizes, named Fast Window Attention (FWA), which uses window aggregation to generate a small number of key sequences (Key and Value) for attention computation. It also explains why ReLU can reasonably approximate SoftMax in lightweight global attention mechanisms. Finally, the paper designs a global-local feature fusion mechanism and combines it with GhostNet to build a lightweight hybrid backbone network, LOLViT. Across classification (ImageNet-1K), detection (COCO 2017), and segmentation (BDD100K), together with extensive ablation studies, LOLViT outperforms CNN models of the same scale in both inference speed and model accuracy. Notably, LOLViT-X runs 5x faster than MobileViT-X.
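The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the two ideas it describes: pooling tokens within windows so the Key/Value sequences are much shorter than the query sequence, and replacing SoftMax with ReLU followed by sum normalization. The class name, window size, average pooling, and normalization constant here are all illustrative assumptions, not the paper's actual FWA design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastWindowAttentionSketch(nn.Module):
    """Illustrative sketch: window-aggregated Key/Value sequences
    with a ReLU kernel standing in for SoftMax. Hyperparameters
    are assumptions, not the paper's published configuration."""

    def __init__(self, dim: int, window: int = 7):
        super().__init__()
        self.window = window
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C), where N is the number of tokens (H * W).
        B, N, C = x.shape
        q = self.q(x)  # full-length queries: (B, N, C)

        # Window aggregation: average-pool tokens so Key/Value
        # sequences shrink from N to roughly N // window. Adaptive
        # pooling keeps the reduced length valid for any input size.
        m = max(1, N // self.window)
        pooled = F.adaptive_avg_pool1d(x.transpose(1, 2), m).transpose(1, 2)
        k, v = self.kv(pooled).chunk(2, dim=-1)  # each (B, M, C)

        # ReLU in place of SoftMax: non-negative scores rescaled to
        # sum to 1 along the key axis, mimicking a probability map.
        scores = F.relu(q @ k.transpose(1, 2)) / C ** 0.5  # (B, N, M)
        attn = scores / (scores.sum(dim=-1, keepdim=True) + 1e-6)
        return self.proj(attn @ v)  # back to (B, N, C)

# Example: 196 tokens (a 14x14 feature map) with 64 channels.
x = torch.randn(2, 196, 64)
y = FastWindowAttentionSketch(64)(x)  # (2, 196, 64)
```

Because attention here is computed between N queries and only M aggregated keys, the score matrix costs O(N*M) rather than O(N^2), which is the rough shape of the savings the abstract attributes to window aggregation.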
Similar Papers
A Lightweight Convolution and Vision Transformer integrated model with Multi-scale Self-attention Mechanism
CV and Pattern Recognition
Makes computer vision faster and smarter.
LAPX: Lightweight Hourglass Network with Global Context
CV and Pattern Recognition
Helps computers see people's body movements quickly.