Make Your Training Flexible: Towards Deployment-Efficient Video Models
By: Chenting Wang, Kunchang Li, Tianxiang Jiang, and more
Potential Business Impact:
Makes video models train and run faster, using fewer input tokens.
Popular video training methods mainly operate on a fixed number of tokens sampled from a predetermined spatiotemporal grid, resulting in sub-optimal accuracy-computation trade-offs due to inherent video redundancy. They also lack adaptability to varying computational budgets for downstream tasks, which hinders the application of the most competitive models in real-world scenarios. We therefore propose a new test setting, Token Optimization, which maximizes input information across budgets by optimizing the size-limited set of input tokens through token selection from more suitably sampled videos. To this end, we introduce a novel augmentation tool termed Flux. By making the sampling grid flexible and leveraging token selection, it is easily adopted in most popular video training frameworks, boosting model robustness at nearly no additional cost. We integrate Flux into large-scale video pre-training, and the resulting FluxViT establishes new state-of-the-art results across extensive tasks at standard costs. Notably, with only 1/4 of the tokens, it can still match the performance of previous state-of-the-art models under Token Optimization, yielding nearly 90% compute savings. All models and data are available at https://github.com/OpenGVLab/FluxViT.
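As a rough illustration of the ideas in the abstract, the sketch below tokenizes a clip on a denser-than-budget spatiotemporal grid and then keeps only a fixed number of tokens, mimicking Token Optimization at test time and Flux-style flexible-grid augmentation at train time. The helper names (`patchify`, `select_tokens`, `flux_augment`), the candidate grids, and the L2-norm scoring rule are illustrative assumptions, not the authors' released API.

```python
import random
import torch

def patchify(video, grid):
    """Split a (C, T, H, W) clip into grid[0]*grid[1]*grid[2] flat patch tokens."""
    C, T, H, W = video.shape
    t, h, w = grid
    pt, ph, pw = T // t, H // h, W // w
    x = video[:, : t * pt, : h * ph, : w * pw]      # crop to a divisible size
    x = x.reshape(C, t, pt, h, ph, w, pw)
    x = x.permute(1, 3, 5, 0, 2, 4, 6)              # (t, h, w, C, pt, ph, pw)
    return x.reshape(t * h * w, C * pt * ph * pw)   # (num_tokens, token_dim)

def select_tokens(tokens, budget, scores=None):
    """Test-time sketch: keep the `budget` highest-scoring tokens.

    The L2 norm is only a stand-in importance score; a real system would
    score tokens with the model itself (e.g., attention-based importance).
    """
    if scores is None:
        scores = tokens.norm(dim=-1)                # placeholder importance
    keep = scores.topk(min(budget, tokens.shape[0])).indices
    return tokens[keep], keep

def flux_augment(video, budget, grid_choices):
    """Train-time sketch: random grid plus a random token subset, so the
    token count stays fixed while the sampling grid varies per sample."""
    tokens = patchify(video, random.choice(grid_choices))
    keep = torch.randperm(tokens.shape[0])[:budget]
    return tokens[keep]

video = torch.randn(3, 32, 224, 224)                # toy (C, T, H, W) clip
dense = patchify(video, grid=(16, 14, 14))          # 3136 tokens on a dense grid
kept, idx = select_tokens(dense, budget=784)        # keep 1/4 of the tokens
aug = flux_augment(video, budget=784,
                   grid_choices=[(8, 14, 14), (16, 14, 14), (16, 7, 7)])
print(kept.shape, aug.shape)                        # e.g. torch.Size([784, 1536]) ...
```

Note that the flat token dimension varies with the grid in this toy version; an actual model would use a fixed patch size with a shared learned patch embedding so tokens from different grids live in the same feature space.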
Similar Papers
TokenFLEX: Unified VLM Training for Flexible Visual Tokens Inference
CV and Pattern Recognition
Lets computers understand pictures better, faster.
Video, How Do Your Tokens Merge?
CV and Pattern Recognition
Makes video models run faster without losing quality.
Progressive Growing of Video Tokenizers for Temporally Compact Latent Spaces
CV and Pattern Recognition
Makes videos smaller for faster AI creation.