Parallel Vision Token Scheduling for Fast and Accurate Multimodal LMMs Inference
By: Wengyi Zhan, Mingbao Lin, Zhihang Lin, and more
Potential Business Impact:
Makes AI understand pictures much faster.
Multimodal large language models (MLLMs) deliver impressive vision-language reasoning but suffer steep inference latency: self-attention scales quadratically with sequence length, and high-resolution images contribute thousands of visual tokens. Naively pruning less-informative visual tokens reduces this burden, yet indiscriminate removal can strip away contextual cues essential for background or fine-grained questions, undermining accuracy. In this paper, we present ParVTS (Parallel Vision Token Scheduling), a training-free scheduling framework that partitions visual tokens into subject and non-subject groups, processes the two groups in parallel to transfer their semantics into the question tokens, and discards the non-subject path mid-inference to reduce computation. This scheduling lowers computational complexity, requires no heuristics or additional modules, and is compatible with diverse existing MLLM architectures. Experiments across multiple MLLM backbones show that ParVTS prunes up to 88.9% of visual tokens with minimal performance drop, achieving a 1.77x speedup and a 70% reduction in FLOPs.
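To make the scheduling idea concrete, below is a minimal, hypothetical Python sketch (not the authors' code) of a ParVTS-style forward pass. It assumes the subject/non-subject split can be approximated by question-to-vision attention scores, that the two paths share the same transformer layers, and that the non-subject path is discarded at a fixed mid-network layer; the helper names (`split_visual_tokens`, `scheduled_forward`), the `keep_ratio` of 11.1% (matching the reported 88.9% pruning), and the `drop_layer` index are illustrative assumptions.

```python
# Hypothetical sketch of ParVTS-style parallel vision token scheduling.
# Assumptions: token embeddings are precomputed, the subject/non-subject
# split is approximated by question-to-vision attention, and the
# non-subject path is dropped after a fixed mid-network layer.
import torch

def split_visual_tokens(vis, q, keep_ratio=0.111):
    """Rank visual tokens by mean attention from question tokens and split
    them into a 'subject' group (kept) and a 'non-subject' group."""
    # (num_q, num_vis) similarity used as a stand-in for attention scores
    scores = torch.softmax(q @ vis.T / vis.shape[-1] ** 0.5, dim=-1).mean(0)
    k = max(1, int(keep_ratio * vis.shape[0]))
    subject_idx = scores.topk(k).indices
    mask = torch.ones(vis.shape[0], dtype=torch.bool)
    mask[subject_idx] = False
    return vis[subject_idx], vis[mask]

def scheduled_forward(layers, vis, q, drop_layer=8):
    """Run subject and non-subject visual tokens as two parallel paths that
    each carry the question tokens; discard the non-subject path
    mid-inference so later layers only see the subject path."""
    subject, non_subject = split_visual_tokens(vis, q)
    path_a = torch.cat([subject, q], dim=0)       # subject path
    path_b = torch.cat([non_subject, q], dim=0)   # non-subject path
    n_q = q.shape[0]
    for i, layer in enumerate(layers):
        path_a = layer(path_a)
        if i < drop_layer:
            path_b = layer(path_b)                # parallel until drop_layer
        elif i == drop_layer:
            # transfer non-subject semantics into the question tokens,
            # then drop the non-subject path entirely
            path_a[-n_q:] = 0.5 * (path_a[-n_q:] + path_b[-n_q:])
            path_b = None
    return path_a

if __name__ == "__main__":
    torch.manual_seed(0)
    layers = [torch.nn.Linear(64, 64) for _ in range(16)]  # stand-in layers
    vis, q = torch.randn(576, 64), torch.randn(12, 64)
    with torch.no_grad():
        out = scheduled_forward(layers, vis, q)
    print(out.shape)  # retained subject tokens plus question tokens
```

The speedup in this sketch comes from the later layers processing only the small subject path, while the parallel early layers let the non-subject tokens still inform the question tokens before being discarded.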
Similar Papers
MMTok: Multimodal Coverage Maximization for Efficient Inference of VLMs
CV and Pattern Recognition
Makes AI understand pictures faster and better.
Direct Visual Grounding by Directing Attention of Visual Tokens
CV and Pattern Recognition
Makes AI better at answering questions about pictures.
Rethinking Visual Information Processing in Multimodal LLMs
CV and Pattern Recognition
Lets computers understand pictures and words together better.