Video, How Do Your Tokens Merge?
By: Sam Pollard, Michael Wray
Potential Business Impact:
Makes video understanding models run faster without losing accuracy.
Video transformer models require huge amounts of compute due to the spatio-temporal scaling of their input. To tackle this, recent methods have proposed dropping or merging tokens in image models, either randomly or via learned methods. Merging tokens has several benefits: it can be plugged into any vision transformer, requires no model re-training, and propagates information through the model that would otherwise be dropped. Until now, however, video token merging had not been evaluated on temporally complex video-understanding datasets. In this work, we explore training-free token merging for video, providing comprehensive experiments and best practices across four video transformers on three datasets covering coarse- and fine-grained action recognition. Our results showcase the benefits of video token merging, with a speedup of around $2.5\times$ while maintaining accuracy (avg. $-0.55\%$ for ViViT). Code is available at https://github.com/sjpollard/video-how-do-your-tokens-merge.
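To make the idea concrete, here is a minimal sketch of the training-free token-merging operator (bipartite soft matching, as popularized by ToMe) that this line of work plugs into transformer blocks. The function name, shapes, and unweighted averaging are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of training-free token merging via bipartite soft
# matching. Illustrative only; ToMe additionally tracks token sizes
# for a weighted mean and protects the class token.
import torch


def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Merge away the r most similar tokens in x of shape (B, N, C)."""
    B, N, C = x.shape
    # Split tokens into two alternating sets A and B (bipartite matching).
    a, b = x[:, ::2, :], x[:, 1::2, :]
    # Cosine similarity between every A-token and every B-token.
    a_n = a / a.norm(dim=-1, keepdim=True)
    b_n = b / b.norm(dim=-1, keepdim=True)
    scores = a_n @ b_n.transpose(-1, -2)              # (B, N_a, N_b)
    # Each A-token's best match in B.
    best_val, best_idx = scores.max(dim=-1)           # (B, N_a)
    # The r A-tokens with the highest similarity get merged away.
    order = best_val.argsort(dim=-1, descending=True)
    merged_src, kept_src = order[:, :r], order[:, r:]
    dst_idx = best_idx.gather(-1, merged_src)         # destinations in B
    # Average each merged A-token into its matched B-token, so its
    # information is propagated through the model rather than dropped.
    src = a.gather(1, merged_src.unsqueeze(-1).expand(-1, -1, C))
    b = b.scatter_reduce(1, dst_idx.unsqueeze(-1).expand(-1, -1, C),
                         src, reduce="mean", include_self=True)
    a_kept = a.gather(1, kept_src.unsqueeze(-1).expand(-1, -1, C))
    # N - r tokens remain.
    return torch.cat([a_kept, b], dim=1)


# Example: merge 16 of 196 spatio-temporal tokens in one call.
tokens = torch.randn(2, 196, 768)
print(merge_tokens(tokens, r=16).shape)               # torch.Size([2, 180, 768])
```

In practice this operator is typically applied inside each transformer block (between attention and MLP in ToMe), so the token count, and thus the compute, shrinks progressively through the network without any re-training.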
Similar Papers
Multi-Granular Spatio-Temporal Token Merging for Training-Free Acceleration of Video LLMs
CV and Pattern Recognition
Helps computers understand videos using less computing power.
Make Your Training Flexible: Towards Deployment-Efficient Video Models
CV and Pattern Recognition
Makes video models train faster, using less data.
Adaptive Token Merging for Efficient Transformer Semantic Communication at the Edge
Machine Learning (CS)
Makes smart computer programs run faster and cheaper.