Atom: Efficient On-Device Video-Language Pipelines Through Modular Reuse
By: Kunjal Panchal, Saayan Mitra, Somdeb Sarkhel, and more
Recent advances in video-language models have enabled powerful applications such as video retrieval, captioning, and assembly. However, executing such multi-stage pipelines efficiently on mobile devices remains challenging due to redundant model loads and fragmented execution. We introduce Atom, an on-device system that restructures video-language pipelines for fast and efficient execution. Atom decomposes a billion-parameter model into reusable modules, such as the visual encoder and language decoder, and reuses them across subtasks like captioning, reasoning, and indexing. This reuse-centric design eliminates repeated model loading and enables parallel execution, reducing end-to-end latency while largely preserving task quality. On commodity smartphones, Atom achieves 27--33% faster execution than non-reuse baselines, with only a marginal quality drop ($\leq$ 2.3 Recall@1 in retrieval, $\leq$ 1.5 CIDEr in captioning). These results position Atom as a practical, scalable approach to efficient video-language understanding on edge devices.
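To make the reuse idea concrete, below is a minimal illustrative sketch of the pattern the abstract describes: load shared modules (visual encoder, language decoder) once, cache them, and route several subtasks through the cached copies in parallel. This is not Atom's actual API; the names `ModuleRegistry` and `run_pipeline` and the stand-in loaders are hypothetical.

```python
# Sketch of reuse-centric execution: modules are loaded at most once and
# shared across subtasks (captioning, indexing) instead of being reloaded
# per stage. All names here are illustrative, not Atom's implementation.

from concurrent.futures import ThreadPoolExecutor


class ModuleRegistry:
    """Loads each model module at most once and hands out the cached copy."""

    def __init__(self, loaders):
        self._loaders = loaders      # name -> zero-argument loader function
        self._cache = {}             # name -> loaded module

    def get(self, name):
        # Avoid the redundant model loads that fragment pipeline execution.
        if name not in self._cache:
            self._cache[name] = self._loaders[name]()
        return self._cache[name]


def run_pipeline(registry, frames):
    encoder = registry.get("visual_encoder")
    decoder = registry.get("language_decoder")

    # Encode frames once; every downstream subtask reuses the embeddings.
    embeddings = [encoder(f) for f in frames]

    # Subtasks share the cached modules and run concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        captions = pool.submit(lambda: [decoder(e) for e in embeddings])
        index = pool.submit(lambda: {i: e for i, e in enumerate(embeddings)})
        return captions.result(), index.result()


if __name__ == "__main__":
    # Stand-in loaders; a real system would load on-device model weights here.
    registry = ModuleRegistry({
        "visual_encoder": lambda: (lambda frame: sum(frame)),
        "language_decoder": lambda: (lambda emb: f"caption for embedding {emb}"),
    })
    caps, idx = run_pipeline(registry, frames=[[1, 2, 3], [4, 5, 6]])
    print(caps, idx)
```

The key design choice mirrored here is that the registry, not the individual subtasks, owns module lifetimes, so adding another subtask incurs no additional load cost.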