Neighbor-Aware Token Reduction via Hilbert Curve for Vision Transformers
By: Yunge Li, Lanyu Xu
Potential Business Impact:
Makes computer vision models faster by grouping image tokens more intelligently.
Vision Transformers (ViTs) have achieved remarkable success in visual recognition tasks, but redundant token representations limit their computational efficiency. Existing token merging and pruning strategies often overlook spatial continuity and neighbor relationships, resulting in the loss of local context. This paper proposes novel neighbor-aware token reduction methods based on Hilbert curve reordering, which explicitly preserves 2D neighbor structure within a 1D sequential representation. Our method introduces two key strategies: Neighbor-Aware Pruning (NAP) for selective token retention and Merging by Adjacent Token similarity (MAT) for local token aggregation. Experiments demonstrate that our approach achieves state-of-the-art accuracy-efficiency trade-offs compared to existing methods. This work highlights the importance of spatial continuity and neighbor structure, offering new insights for the architectural optimization of ViTs.
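To make the core idea concrete, the sketch below shows (a) the standard Hilbert curve index-to-coordinate mapping, which turns a 2D token grid into a 1D sequence where consecutive tokens are always spatial neighbors, and (b) a simplified adjacent-token merging pass in that 1D order. This is a minimal illustration, not the paper's implementation: the function names, the averaging merge rule, and the cosine-similarity threshold are assumptions for demonstration; the actual NAP/MAT procedures are defined in the paper.

```python
def hilbert_d2xy(n, d):
    """Map a 1D Hilbert-curve distance d to (x, y) on an n x n grid.

    n must be a power of 2. Standard iterative Hilbert curve algorithm:
    consecutive values of d land on adjacent grid cells, which is the
    neighbor-preserving property the token reordering relies on.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate the quadrant if needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x      # swap coordinates
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y


def cosine(a, b):
    """Cosine similarity between two token feature vectors."""
    dot = sum(u * v for u, v in zip(a, b))
    na = sum(u * u for u in a) ** 0.5
    nb = sum(v * v for v in b) ** 0.5
    return dot / (na * nb)


def merge_adjacent(tokens, threshold=0.9):
    """Toy adjacent-token merging (hypothetical MAT-style pass).

    Walks the Hilbert-ordered token sequence and averages each token
    into its predecessor when their cosine similarity exceeds the
    threshold; otherwise the token starts a new group.
    """
    merged = [list(tokens[0])]
    for tok in tokens[1:]:
        if cosine(merged[-1], tok) > threshold:
            merged[-1] = [(u + v) / 2 for u, v in zip(merged[-1], tok)]
        else:
            merged.append(list(tok))
    return merged


# Reorder a 4x4 token grid along the Hilbert curve: every step in the
# 1D sequence moves to a 2D-adjacent cell, so local context survives.
order = [hilbert_d2xy(4, d) for d in range(16)]
```

In a real ViT, `tokens` would be the patch embeddings gathered in `order`; similar neighboring patches (e.g. uniform background) collapse into single tokens, shortening the sequence the attention layers must process.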
Similar Papers
HEART-VIT: Hessian-Guided Efficient Dynamic Attention and Token Pruning in Vision Transformer
CV and Pattern Recognition
Makes AI image tools faster and use less power.
SPOT: Sparsification with Attention Dynamics via Token Relevance in Vision Transformers
CV and Pattern Recognition
Makes computer vision faster by removing unneeded parts.
Frequency-Aware Token Reduction for Efficient Vision Transformer
CV and Pattern Recognition
Makes computer vision faster and smarter.