Token Compression Meets Compact Vision Transformers: A Survey and Comparative Evaluation for Edge AI

Published: July 13, 2025 | arXiv ID: 2507.09702v1

By: Phat Nguyen, Ngai-Man Cheung

Potential Business Impact:

Speeds up vision AI models by discarding redundant image tokens before processing.

Business Areas:
Text Analytics, Data and Analytics, Software

Token compression techniques have recently emerged as powerful tools for accelerating Vision Transformer (ViT) inference in computer vision. Because self-attention has quadratic computational complexity in the token sequence length, these methods remove less informative tokens before the attention layers to improve inference throughput. While numerous studies have explored various accuracy-efficiency trade-offs on large-scale ViTs, two critical gaps remain. First, there is a lack of a unified survey that systematically categorizes and compares token compression approaches based on their core strategies (e.g., pruning, merging, or hybrid) and deployment settings (e.g., fine-tuning vs. plug-in). Second, most benchmarks are limited to standard ViT models (e.g., ViT-B, ViT-L), leaving open the question of whether such methods remain effective when applied to structurally compressed transformers, which are increasingly deployed on resource-constrained edge devices. To address these gaps, we present the first systematic taxonomy and comparative study of token compression methods, and we evaluate representative techniques on both standard and compact ViT architectures. Our experiments reveal that while token compression methods are effective for general-purpose ViTs, they often underperform when directly applied to compact designs. These findings not only provide practical insights but also pave the way for future research on adapting token optimization techniques to compact transformer-based networks for edge AI and AI agent applications.
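To make the core idea concrete, here is a minimal sketch of token pruning, the simplest of the strategies the abstract mentions. It is not the paper's method: it keeps the top-k patch tokens by a hypothetical importance proxy (L2 norm here; real plug-in methods typically use attention scores from the [CLS] token), always preserving the [CLS] token and the original token order.

```python
import numpy as np

def prune_tokens(tokens: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Drop the least informative patch tokens before an attention layer.

    tokens: (num_tokens, dim) array; token 0 is treated as the [CLS]
    token and is always kept. Importance is approximated by L2 norm,
    an illustrative stand-in for attention-based scores.
    """
    cls_tok, patch_toks = tokens[:1], tokens[1:]
    k = max(1, int(round(keep_ratio * patch_toks.shape[0])))
    scores = np.linalg.norm(patch_toks, axis=1)
    # Select top-k indices, then sort them to preserve spatial order.
    keep = np.sort(np.argsort(scores)[-k:])
    return np.concatenate([cls_tok, patch_toks[keep]], axis=0)

# Example: 1 CLS token + 8 patch tokens of dim 4; keep half the patches.
x = np.arange(36, dtype=float).reshape(9, 4)
pruned = prune_tokens(x, keep_ratio=0.5)
print(pruned.shape)  # (5, 4): CLS + the 4 highest-norm patch tokens
```

Because attention cost scales quadratically with sequence length, halving the patch tokens roughly quarters the attention FLOPs in subsequent layers; merging methods instead average similar tokens rather than discarding them outright.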

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
6 pages

Category
Computer Science:
CV and Pattern Recognition