MARC: Memory-Augmented RL Token Compression for Efficient Video Understanding
By: Peiran Wu, Zhuorui Yu, Yunze Liu, and more
Potential Business Impact:
Makes computers understand videos using less data.
The rapid progress of large language models (LLMs) has laid the foundation for multimodal models. However, vision-language models (VLMs) still incur heavy computational costs when extended from images to videos because of high frame rates and long durations. Token compression is a promising remedy, yet most existing training-free methods cause information loss and performance degradation. To overcome this, we propose Memory-Augmented Reinforcement Learning-based Token Compression (MARC), which integrates structured retrieval with RL-based distillation. MARC adopts a retrieve-then-compress strategy: a Visual Memory Retriever (VMR) selects key clips, and a Compression Group Relative Policy Optimization (C-GRPO) framework distils reasoning ability from a teacher model into a student model. Experiments on six video benchmarks show that MARC achieves near-baseline accuracy using only one frame's worth of tokens, reducing visual tokens by 95%, GPU memory by 72%, and latency by 23.9%. These results demonstrate its potential for efficient, real-time video understanding in resource-constrained settings such as video QA, surveillance, and autonomous driving.
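The retrieve-then-compress idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the per-clip relevance scores, and the token representation are all hypothetical, standing in for the Visual Memory Retriever's clip selection followed by token compression.

```python
# Hypothetical sketch of a retrieve-then-compress pipeline: rank clips by a
# relevance score (standing in for the Visual Memory Retriever), keep the
# top-k clips, and retain only their visual tokens.

def retrieve_then_compress(clip_tokens, clip_scores, k=1):
    """clip_tokens: one token list per clip; clip_scores: relevance per clip.

    Returns the indices of the kept clips (in temporal order) and the
    concatenated tokens of those clips.
    """
    # Rank clip indices from most to least relevant.
    ranked = sorted(range(len(clip_scores)),
                    key=lambda i: clip_scores[i], reverse=True)
    # Keep the top-k clips, restoring temporal order for the model input.
    keep = sorted(ranked[:k])
    compressed = [tok for i in keep for tok in clip_tokens[i]]
    return keep, compressed

# Example: 4 clips of 8 tokens each; keeping 1 clip drops 75% of the tokens.
tokens = [[f"t{c}_{j}" for j in range(8)] for c in range(4)]
scores = [0.1, 0.9, 0.3, 0.2]
kept, comp = retrieve_then_compress(tokens, scores, k=1)
```

In the paper's setting the compression is far more aggressive (down to roughly one frame's worth of tokens), and the student model is additionally trained with C-GRPO so that accuracy survives the token reduction; the sketch only shows the selection mechanics.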
Similar Papers
Compressor-VLA: Instruction-Guided Visual Token Compression for Efficient Robotic Manipulation
Robotics
Helps robots see and act faster.
Video-XL-Pro: Reconstructive Token Compression for Extremely Long Video Understanding
CV and Pattern Recognition
Lets computers watch and understand very long videos.
Learning Free Token Reduction for Multi-Modal Large Language Models
CV and Pattern Recognition
Makes AI understand videos faster and cheaper.