MoR-ViT: Efficient Vision Transformer with Mixture-of-Recursions
By: YiZhou Li
Potential Business Impact:
Lets computers see better with less work.
Vision Transformers (ViTs) have achieved remarkable success in image recognition, yet standard ViT architectures are hampered by substantial parameter redundancy and high computational cost, limiting their practical deployment. While recent efforts on efficient ViTs primarily focus on static model compression or token-level sparsification, they remain constrained by fixed computational depth for all tokens. In this work, we present MoR-ViT, a novel vision transformer framework that, for the first time, incorporates a token-level dynamic recursion mechanism inspired by the Mixture-of-Recursions (MoR) paradigm. This approach enables each token to adaptively determine its processing depth, yielding a flexible and input-dependent allocation of computational resources. Extensive experiments on ImageNet-1K and transfer benchmarks demonstrate that MoR-ViT not only achieves state-of-the-art accuracy with up to 70% parameter reduction and 2.5x inference acceleration, but also outperforms leading efficient ViT baselines such as DynamicViT and TinyViT under comparable conditions. These results establish dynamic recursion as an effective strategy for efficient vision transformers and open new avenues for scalable and deployable deep learning models in real-world scenarios.
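To make the token-level dynamic recursion idea concrete, here is a minimal NumPy sketch of one plausible reading of the mechanism the abstract describes: a single shared block is applied recursively, and a lightweight router decides per token whether to recurse again or exit early. All names, the thresholded-sigmoid router, and the linear-plus-ReLU stand-in for a full transformer block are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; MoR-ViT operates on full ViT token embeddings).
num_tokens, dim, max_depth = 8, 16, 4

# Weights of ONE shared block, reused at every recursion step
# (this weight sharing is the source of the parameter reduction).
W_block = rng.normal(scale=0.1, size=(dim, dim))
# Router weights: scores each token; a high score means "needs more processing".
w_router = rng.normal(scale=0.1, size=dim)

def mor_forward(tokens, threshold=0.5):
    """Apply the shared block recursively; each token exits once its
    router score drops below `threshold` or max_depth is reached."""
    x = tokens.copy()
    depths = np.zeros(num_tokens, dtype=int)     # per-token recursion count
    active = np.ones(num_tokens, dtype=bool)     # tokens still recursing
    for _ in range(max_depth):
        # Stand-in for a transformer block: linear map + ReLU.
        x[active] = np.maximum(x[active] @ W_block, 0.0)
        depths[active] += 1
        # Router decides which tokens continue to the next recursion.
        scores = 1.0 / (1.0 + np.exp(-(x @ w_router)))
        active &= scores > threshold
        if not active.any():                     # all tokens have exited
            break
    return x, depths

tokens = rng.normal(size=(num_tokens, dim))
out, depths = mor_forward(tokens)
```

Because `depths` varies per token with the input, easy tokens cost fewer block applications than hard ones, which is the input-dependent compute allocation the abstract claims; the real model would train the router jointly with the backbone rather than thresholding random weights.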
Similar Papers
CoMViT: An Efficient Vision Backbone for Supervised Classification in Medical Imaging
CV and Pattern Recognition
Makes AI see medical pictures better with less power.
Harnessing the Computation Redundancy in ViTs to Boost Adversarial Transferability
CV and Pattern Recognition
Makes computer vision models easier to trick.