Towards Efficient Vision State Space Models via Token Merging
By: Jinyoung Park, Minseok Son, Changick Kim
Potential Business Impact:
Makes computer vision models faster and smaller.
State Space Models (SSMs) have emerged as powerful architectures in computer vision, yet improving their computational efficiency remains crucial for practical and scalable deployment. While token reduction serves as an effective approach to model efficiency, applying it to SSMs requires careful consideration of their unique sequential modeling capabilities. In this work, we propose MaMe, a token-merging strategy tailored for SSM-based vision models. MaMe addresses two key challenges: quantifying token importance and preserving sequential properties. Our approach leverages the state transition parameter $\mathbf{\Delta}$ as an informativeness measure and introduces strategic token arrangements to preserve sequential information flow. Extensive experiments demonstrate that MaMe achieves superior efficiency-performance trade-offs for both fine-tuned and off-the-shelf models. In particular, our approach maintains robustness even under aggressive token reduction, where existing methods undergo significant performance degradation. Beyond image classification, MaMe shows strong generalization across video and audio domains, establishing an effective approach for enhancing efficiency in diverse SSM applications.
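To make the abstract's idea concrete, here is a minimal sketch of Δ-guided token merging. This is an illustrative assumption, not the paper's actual algorithm: it scores each token by its mean $\mathbf{\Delta}$ value, keeps the top-scoring tokens in their original sequence order (preserving sequential information flow), and averages each dropped token into its most similar kept token. The function name `mame_style_merge` and all details are hypothetical.

```python
import numpy as np

def mame_style_merge(tokens, delta, keep_ratio=0.5):
    """Hypothetical sketch of Delta-guided token merging.

    tokens: (n, d) array of token features.
    delta:  (n,) or (n, d) state-transition parameter values per token,
            used here as an informativeness score (an assumption).
    keep_ratio: fraction of tokens to keep.
    Returns a (k, d) array of merged tokens in original sequence order.
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    # Score tokens by mean Delta magnitude (illustrative choice).
    score = delta.mean(axis=-1) if delta.ndim > 1 else delta
    # Keep the k highest-scoring tokens, sorted to preserve sequence order.
    keep = np.sort(np.argsort(score)[-k:])
    drop = np.setdiff1d(np.arange(n), keep)
    merged = tokens[keep].copy()
    if drop.size:
        # Assign each dropped token to its most cosine-similar kept token.
        a = tokens[drop] / (np.linalg.norm(tokens[drop], axis=1, keepdims=True) + 1e-8)
        b = merged / (np.linalg.norm(merged, axis=1, keepdims=True) + 1e-8)
        target = (a @ b.T).argmax(axis=1)
        # Merge by running average so each kept token absorbs its group.
        counts = np.ones(k)
        for src, dst in zip(drop, target):
            merged[dst] = (merged[dst] * counts[dst] + tokens[src]) / (counts[dst] + 1)
            counts[dst] += 1
    return merged
```

Sorting the kept indices before gathering is the step that keeps the reduced sequence in its original order, which matters for SSMs since their recurrence is order-dependent, unlike attention.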
Similar Papers
First-order State Space Model for Lightweight Image Super-resolution
CV and Pattern Recognition
Makes pictures clearer with smarter computer math.
ToMA: Token Merge with Attention for Diffusion Models
Machine Learning (CS)
Makes AI image creation much faster.
X-VMamba: Explainable Vision Mamba
CV and Pattern Recognition
Shows how computer vision "sees" medical images.