Token Compression Meets Compact Vision Transformers: A Survey and Comparative Evaluation for Edge AI
By: Phat Nguyen, Ngai-Man Cheung
Potential Business Impact:
Makes AI see faster by removing extra details.
Token compression techniques have recently emerged as powerful tools for accelerating Vision Transformer (ViT) inference in computer vision. Because self-attention's computational cost grows quadratically with the token sequence length, these methods remove less informative tokens before the attention layers to improve inference throughput. While numerous studies have explored various accuracy-efficiency trade-offs on large-scale ViTs, two critical gaps remain. First, there is no unified survey that systematically categorizes and compares token compression approaches by their core strategy (e.g., pruning, merging, or hybrid) and deployment setting (e.g., fine-tuning vs. plug-in). Second, most benchmarks are limited to standard ViT models (e.g., ViT-B, ViT-L), leaving open the question of whether such methods remain effective on structurally compressed transformers, which are increasingly deployed on resource-constrained edge devices. To address these gaps, we present the first systematic taxonomy and comparative study of token compression methods, evaluating representative techniques on both standard and compact ViT architectures. Our experiments reveal that while token compression methods are effective for general-purpose ViTs, they often underperform when applied directly to compact designs. These findings provide practical insights and pave the way for future research on adapting token optimization techniques to compact transformer-based networks for edge AI and AI agent applications.
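To make the core idea concrete, below is a minimal PyTorch sketch of attention-based token pruning with an optional fused summary token, in the spirit of hybrid prune-and-merge methods such as EViT. It is an illustration, not code from the paper: the function name `prune_tokens`, the `keep_ratio` parameter, and the use of [CLS]-to-patch attention as the importance score are assumptions chosen for clarity.

```python
import torch

def prune_tokens(x, cls_attn, keep_ratio=0.7, fuse_pruned=True):
    """Drop the least-attended patch tokens; optionally fuse the dropped
    ones into a single summary token (hybrid prune-and-merge sketch).

    x:        (B, 1 + N, D) token sequence, index 0 is the [CLS] token
    cls_attn: (B, N) attention weights from [CLS] to each patch token,
              e.g. averaged over heads from the previous attention layer
    """
    B, T, D = x.shape
    N = T - 1                        # number of patch tokens
    k = max(1, int(N * keep_ratio))  # tokens to keep

    cls_tok, patches = x[:, :1], x[:, 1:]

    # Rank patch tokens by how much the [CLS] token attends to them.
    keep_idx = cls_attn.topk(k, dim=1).indices                       # (B, k)
    kept = patches.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))

    if fuse_pruned and k < N:
        # Hybrid variant: instead of discarding pruned tokens, merge them
        # into one token, weighted by their attention scores.
        mask = torch.ones(B, N, dtype=torch.bool, device=x.device)
        mask.scatter_(1, keep_idx, False)        # True at pruned positions
        w = (cls_attn * mask).unsqueeze(-1)      # (B, N, 1)
        fused = (patches * w).sum(1, keepdim=True) / \
                w.sum(1, keepdim=True).clamp_min(1e-6)
        kept = torch.cat([kept, fused], dim=1)

    return torch.cat([cls_tok, kept], dim=1)

# Example: compress the 196 patch tokens of a ViT-B-like layer to ~70%.
x = torch.randn(2, 197, 768)
cls_attn = torch.rand(2, 196)
print(prune_tokens(x, cls_attn).shape)  # torch.Size([2, 139, 768])
```

Because attention cost scales quadratically with sequence length, dropping roughly 30% of patch tokens at an intermediate layer cuts the cost of all subsequent attention layers by about half, which is the throughput lever these methods exploit.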
Similar Papers
Vision Transformers on the Edge: A Comprehensive Survey of Model Compression and Acceleration Strategies
CV and Pattern Recognition
Makes smart computer vision work on small devices.
Token Transforming: A Unified and Training-Free Token Compression Framework for Vision Transformer Acceleration
CV and Pattern Recognition
Makes AI see faster with less work.
Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods
CV and Pattern Recognition
Makes AI understand pictures faster and better.