MobileViCLIP: An Efficient Video-Text Model for Mobile Devices
By: Min Yang, Zihan Jia, Zhilin Dai, and more
Potential Business Impact:
Makes phone apps understand videos faster.
Efficient lightweight neural networks are attracting increasing attention due to their faster inference speed and easier deployment on mobile devices. However, existing video pre-trained models still rely on the common ViT architecture with high latency, and few works attempt to build efficient architectures for mobile devices. This paper bridges this gap by introducing temporal structural reparameterization into an efficient image-text model and training it on a large-scale, high-quality video-text dataset, resulting in an efficient video-text model that can run on mobile devices with strong zero-shot classification and retrieval capabilities, termed MobileViCLIP. In particular, in terms of inference speed on mobile devices, our MobileViCLIP-Small is 55.4x faster than InternVideo2-L14 and 6.7x faster than InternVideo2-S14. In terms of zero-shot retrieval performance, our MobileViCLIP-Small achieves performance similar to InternVideo2-L14 and is 6.9% better than InternVideo2-S14 on MSR-VTT. The code is available at https://github.com/MCG-NJU/MobileViCLIP.
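To illustrate the general idea behind temporal structural reparameterization, below is a minimal sketch, assuming a RepVGG-style design in which a depthwise temporal convolution branch and an identity branch are used during training and then fused into a single convolution for inference. The class name `RepTemporalConv` and the exact branch layout are assumptions for illustration; the actual MobileViCLIP block may differ (the official code is at the repository linked above).

```python
# Hypothetical sketch of temporal structural reparameterization (not the
# official MobileViCLIP implementation).
import torch
import torch.nn as nn


class RepTemporalConv(nn.Module):
    """Training time: depthwise temporal conv branch + identity branch.
    Inference time: both branches fused into one depthwise Conv3d."""

    def __init__(self, channels: int):
        super().__init__()
        self.channels = channels
        # Depthwise temporal conv: kernel (3,1,1) mixes adjacent frames only.
        self.temporal = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), groups=channels, bias=True)
        self.fused = None  # populated by reparameterize()

    def forward(self, x):  # x: (B, C, T, H, W)
        if self.fused is not None:
            return self.fused(x)          # single-branch inference path
        return self.temporal(x) + x       # multi-branch training path

    @torch.no_grad()
    def reparameterize(self):
        """Fold the identity branch into the temporal kernel's center tap."""
        fused = nn.Conv3d(self.channels, self.channels, kernel_size=(3, 1, 1),
                          padding=(1, 0, 0), groups=self.channels, bias=True)
        w = self.temporal.weight.clone()  # shape (C, 1, 3, 1, 1)
        w[:, 0, 1, 0, 0] += 1.0           # identity = +1 at the center tap
        fused.weight.copy_(w)
        fused.bias.copy_(self.temporal.bias)
        self.fused = fused


# Usage: the fused module reproduces the two-branch output exactly.
m = RepTemporalConv(channels=8).eval()
x = torch.randn(2, 8, 16, 4, 4)
y_train = m(x)
m.reparameterize()
y_fused = m(x)
assert torch.allclose(y_train, y_fused, atol=1e-5)
```

The point of this trick is that the extra temporal branch only exists during training; at deployment the model pays for a single convolution, which is what keeps mobile-side latency low.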
Similar Papers
MobileCLIP2: Improving Multi-Modal Reinforced Training
CV and Pattern Recognition
Makes phones understand pictures and words faster.
MoCLIP-Lite: Efficient Video Recognition by Fusing CLIP with Motion Vectors
CV and Pattern Recognition
Lets computers understand videos faster and cheaper.
uCLIP: Parameter-Efficient Multilingual Extension of Vision-Language Models with Unpaired Data
CV and Pattern Recognition
Helps computers understand pictures in many languages.