SAC-ViT: Semantic-Aware Clustering Vision Transformer with Early Exit
By: Youbing Hu, Yun Cheng, Anqi Lu, and more
Potential Business Impact:
Cuts the computing cost of Vision Transformers so accurate image recognition can run on low-power, resource-constrained devices.
The Vision Transformer (ViT) excels at global modeling but is hard to deploy on resource-constrained devices because of the quadratic computational complexity of its attention mechanism. To address this, we propose the Semantic-Aware Clustering Vision Transformer (SAC-ViT), a non-iterative approach to improving ViT's computational efficiency. SAC-ViT operates in two stages: Early Exit (EE) and Semantic-Aware Clustering (SAC). In the EE stage, a downsampled input image is processed to extract global semantic information and produce an initial inference result. If this result does not satisfy the EE termination criterion, the tokens are clustered into target and non-target groups based on that semantic information. In the SAC stage, the target tokens are mapped back to the original image, cropped, and re-embedded at full resolution. These target tokens are then combined with the non-target tokens reused from the EE stage, and attention is applied within each cluster. This two-stage design, optimized end to end, reduces spatial redundancy and improves computational efficiency, significantly boosting overall ViT performance. Extensive experiments demonstrate the efficacy of SAC-ViT: it reduces the FLOPs of DeiT by 62% and improves throughput by 1.98× without compromising accuracy.
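As a rough illustration of that control flow, here is a minimal PyTorch sketch of the two-stage inference. It is not the authors' implementation: `ee_model`, `sac_model`, `patchify`, and `exit_threshold` are hypothetical stand-ins, and the patch embedding is assumed to produce the same token grid at both resolutions so tokens can be swapped in place.

```python
import torch
import torch.nn.functional as F

def sac_vit_inference(image, ee_model, sac_model, patchify, exit_threshold=0.9):
    """Hypothetical SAC-ViT two-stage inference (sketch, not the paper's code).

    Assumed interfaces (batch size 1):
      ee_model(tokens)  -> (logits, token_scores)  # coarse ViT + per-token relevance
      sac_model(tokens, cluster_ids) -> logits     # attention restricted per cluster
      patchify(img)     -> tokens                  # patch embedding; same token grid
                                                   # at both resolutions by assumption
    """
    # Stage 1: Early Exit on a downsampled copy of the input image.
    small = F.interpolate(image, scale_factor=0.5, mode="bilinear",
                          align_corners=False)
    coarse_tokens = patchify(small)
    logits, token_scores = ee_model(coarse_tokens)

    # Terminate early when the coarse prediction is already confident enough.
    if logits.softmax(dim=-1).max() >= exit_threshold:
        return logits

    # Otherwise split tokens into target (semantically relevant) and
    # non-target groups using the EE stage's semantic scores.
    target_mask = token_scores > token_scores.median()

    # Stage 2: re-embed target regions from the ORIGINAL image at full
    # resolution; non-target tokens are reused from the EE stage as-is.
    fine_tokens = patchify(image)  # stands in for "map back, crop, embed"
    tokens = torch.where(target_mask.unsqueeze(-1), fine_tokens, coarse_tokens)

    # Attention is applied within each cluster (0 = non-target, 1 = target).
    return sac_model(tokens, cluster_ids=target_mask.long())
```

The saving in this sketch comes from the fact that non-target tokens are never re-embedded or attended to at full resolution; only the target cluster pays the attention cost on fine-grained tokens, and easy images exit before the SAC stage runs at all.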
Similar Papers
A Lightweight Convolution and Vision Transformer integrated model with Multi-scale Self-attention Mechanism
CV and Pattern Recognition
Combines lightweight convolutions with a Vision Transformer and multi-scale self-attention for efficient image recognition.
ECViT: Efficient Convolutional Vision Transformer with Local-Attention and Multi-scale Stages
CV and Pattern Recognition
Speeds up image recognition by pairing convolutions with local attention and multi-scale stages.