OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models
By: Keda Tao, Kele Shao, Bohan Yu, and more
Potential Business Impact:
Makes AI understand videos and sounds faster.
Omnimodal large language models (OmniLLMs) have recently attracted increasing research attention for unified audio-video understanding; however, processing audio-video token sequences creates a significant computational bottleneck. Existing token compression methods have yet to accommodate this emerging need to jointly compress multimodal tokens. To bridge this gap, we present OmniZip, a training-free, audio-guided audio-visual token-compression framework that optimizes multimodal token representations and accelerates inference. Specifically, OmniZip first identifies salient audio tokens, then computes an audio retention score for each time group to capture information density, thereby dynamically guiding video token pruning while preserving cues from audio anchors enhanced by cross-modal similarity. Within each time window, OmniZip compresses the video tokens using an interleaved spatio-temporal scheme. Extensive empirical results demonstrate the merits of OmniZip: it achieves 3.42X inference speedup and 1.4X memory reduction over other top-performing counterparts, while maintaining performance with no training.
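The abstract describes the pipeline only at a high level. Below is a minimal Python sketch of what audio-guided dynamic video token pruning could look like; the tensor shapes, the norm-based saliency heuristic, the keep-ratio mapping, and the function names are illustrative assumptions rather than the authors' implementation, and the interleaved spatio-temporal compression step is omitted.

# Minimal, illustrative sketch (not the OmniZip implementation) of
# audio-guided dynamic video token pruning. Shapes, the saliency
# heuristic, and the keep-ratio mapping are assumptions for demonstration.
import torch

def audio_retention_scores(audio_tokens: torch.Tensor) -> torch.Tensor:
    """Score each time group by the information density of its audio tokens.

    audio_tokens: (num_groups, audio_tokens_per_group, dim)
    Returns (num_groups,) scores in [0, 1]; mean token norm stands in for
    the salient-audio-token selection described in the abstract.
    """
    saliency = audio_tokens.norm(dim=-1).mean(dim=-1)            # (num_groups,)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-6)

def prune_video_tokens(video_tokens, audio_tokens, min_keep=0.25, max_keep=1.0):
    """Keep more video tokens in time groups whose audio is information-dense.

    video_tokens: (num_groups, video_tokens_per_group, dim)
    audio_tokens: (num_groups, audio_tokens_per_group, dim)
    Returns a list of pruned per-group video token tensors.
    """
    scores = audio_retention_scores(audio_tokens)                 # (num_groups,)
    keep_ratios = min_keep + (max_keep - min_keep) * scores       # dynamic budget
    pruned = []
    for g in range(video_tokens.size(0)):
        v = video_tokens[g]                                       # (T, dim)
        a = audio_tokens[g]                                       # (A, dim)
        # Cross-modal similarity: rank each video token by its best
        # match to any audio token in the same time window.
        sim = torch.nn.functional.normalize(v, dim=-1) @ \
              torch.nn.functional.normalize(a, dim=-1).T          # (T, A)
        rank = sim.max(dim=-1).values                             # (T,)
        k = max(1, int(keep_ratios[g].item() * v.size(0)))
        keep_idx = rank.topk(k).indices.sort().values             # keep temporal order
        pruned.append(v[keep_idx])
    return pruned

# Toy usage: 4 time groups, 16 video tokens and 8 audio tokens per group.
video = torch.randn(4, 16, 64)
audio = torch.randn(4, 8, 64)
compressed = prune_video_tokens(video, audio)
print([t.shape[0] for t in compressed])  # tokens kept per group

In this sketch, time groups with higher audio information density retain a larger share of their video tokens, which mirrors the abstract's idea of audio retention scores dynamically guiding video token pruning.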
Similar Papers
InteractiveOmni: A Unified Omni-modal Model for Audio-Visual Multi-turn Dialogue
CV and Pattern Recognition
Lets computers understand and talk about videos.
AudioGen-Omni: A Unified Multimodal Diffusion Transformer for Video-Synchronized Audio, Speech, and Song Generation
Sound
Creates sound from silent videos.
MGM-Omni: Scaling Omni LLMs to Personalized Long-Horizon Speech
Sound
Computer talks like you, understands everything.