SAC-ViT: Semantic-Aware Clustering Vision Transformer with Early Exit

Published: February 27, 2025 | arXiv ID: 2503.00060v1

By: Youbing Hu, Yun Cheng, Anqi Lu, and others

Potential Business Impact:

Enables accurate image recognition with substantially less compute, making Vision Transformers practical on resource-constrained devices.

Business Areas:
Image Recognition, Data and Analytics, Software

The Vision Transformer (ViT) excels at global modeling but faces deployment challenges on resource-constrained devices due to the quadratic computational complexity of its attention mechanism. To address this, we propose the Semantic-Aware Clustering Vision Transformer (SAC-ViT), a non-iterative approach to enhancing ViT's computational efficiency. SAC-ViT operates in two stages: Early Exit (EE) and Semantic-Aware Clustering (SAC). In the EE stage, downsampled input images are processed to extract global semantic information and generate initial inference results. If these results do not meet the EE termination criteria, the tokens are clustered into target and non-target groups. In the SAC stage, target tokens are mapped back to the original image, cropped, and embedded. These target tokens are then combined with reused non-target tokens from the EE stage, and the attention mechanism is applied within each cluster. This two-stage design, with end-to-end optimization, reduces spatial redundancy and enhances computational efficiency, significantly boosting overall ViT performance. Extensive experiments demonstrate the efficacy of SAC-ViT, reducing DeiT's FLOPs by 62% and achieving 1.98 times higher throughput without compromising accuracy.
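The two-stage control flow described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `ee_model`, `sac_model`, the 0.5 confidence threshold, the stride-4 downsampling, and the mean-score clustering rule are all hypothetical stand-ins for the components SAC-ViT learns end to end.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over class logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def ee_stage(image, ee_model, threshold=0.5):
    """Early Exit stage: classify a downsampled image; exit if confident.

    `ee_model` is a hypothetical callable returning (class_logits,
    per_token_scores) for the low-resolution input.
    """
    small = image[::4, ::4]                 # cheap downsampling stand-in
    logits, token_scores = ee_model(small)  # global semantics + token scores
    probs = softmax(logits)
    if probs.max() >= threshold:            # EE termination criterion met
        return int(probs.argmax()), None    # early exit: final prediction
    # Otherwise cluster tokens: "target" tokens carry semantic content
    # (illustrative rule: score above the mean).
    target_mask = token_scores > token_scores.mean()
    return None, (small, token_scores, target_mask)

def sac_stage(image, ee_state, sac_model):
    """SAC stage: re-embed target tokens from the full-resolution image,
    reusing the non-target tokens computed in the EE stage."""
    small, token_scores, target_mask = ee_state
    # Map target tokens back to the original image (stand-in: indices only).
    target_tokens = np.where(target_mask)[0]
    logits = sac_model(image, target_tokens, token_scores, target_mask)
    return int(softmax(logits).argmax())

def sac_vit_infer(image, ee_model, sac_model):
    pred, state = ee_stage(image, ee_model)
    if pred is not None:
        return pred                         # resolved cheaply at low resolution
    return sac_stage(image, state, sac_model)
```

The efficiency gain comes from the branch: easy images terminate at the cheap low-resolution pass, and only hard images pay for full-resolution attention, which is then restricted to within-cluster tokens.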

Country of Origin
🇨🇭 Switzerland, 🇨🇳 China

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition