TAMP: Token-Adaptive Layerwise Pruning in Multimodal Large Language Models
By: Jaewoo Lee, Keyang Xuan, Chanakya Ekbote, and more
Potential Business Impact:
Makes AI models smaller without losing smarts.
Multimodal Large Language Models (MLLMs) have shown remarkable versatility in understanding diverse multimodal data and tasks. However, these capabilities come at the cost of increased model scale. While post-training pruning reduces model size in unimodal models, applying it to MLLMs often yields limited success. Our analysis reveals that conventional methods fail to account for the unique token attributes across layers and modalities inherent to MLLMs. Motivated by this observation, we propose TAMP, a simple yet effective pruning framework tailored for MLLMs, featuring two key components: (1) Diversity-Aware Sparsity, which adjusts the sparsity ratio per layer based on the diversity of multimodal output tokens, preserving more parameters in high-diversity layers; and (2) Adaptive Multimodal Input Activation, which identifies representative multimodal input tokens using attention scores to guide unstructured weight pruning. We validate our method on two state-of-the-art MLLMs: LLaVA-NeXT, designed for vision-language tasks, and VideoLLaMA2, capable of processing audio, visual, and language modalities. Experiments across various multimodal evaluation benchmarks demonstrate that each component of our approach substantially outperforms existing pruning techniques.
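To make the two components more concrete, here is a minimal sketch in PyTorch of how they might be realized. The abstract does not give exact formulas, so several choices below are assumptions rather than the paper's method: diversity is approximated as the mean pairwise cosine distance among a layer's output tokens, the attention-based token weighting uses a simple softmax over per-token attention scores, and the pruning metric follows a Wanda-style |W|·||X|| importance score. The function names (`layer_diversity`, `allocate_sparsity`, `prune_layer`) and parameters (`target_sparsity`, `strength`) are hypothetical.

```python
import torch
import torch.nn.functional as F


def layer_diversity(tokens: torch.Tensor) -> float:
    """Mean pairwise cosine distance among one layer's output tokens.

    tokens: (num_tokens, hidden_dim)
    NOTE: cosine distance is an assumed diversity measure, not the paper's.
    """
    t = F.normalize(tokens, dim=-1)
    cos = t @ t.T                                   # pairwise cosine similarity
    n = t.shape[0]
    off_diag_mean = (cos.sum() - n) / (n * (n - 1))  # exclude the diagonal of ones
    return (1.0 - off_diag_mean).item()              # higher = more diverse tokens


def allocate_sparsity(diversities, target_sparsity=0.5, strength=0.5):
    """Assign lower sparsity to high-diversity layers, keeping the mean near target."""
    d = torch.tensor(diversities)
    d = (d - d.mean()) / (d.std() + 1e-8)            # standardize across layers
    ratios = target_sparsity - strength * target_sparsity * d
    return ratios.clamp(0.0, 0.95).tolist()


def prune_layer(weight, inputs, attn_scores, sparsity):
    """Unstructured pruning with an attention-weighted, Wanda-style score (assumed).

    weight:      (out_features, in_features) linear-layer weights
    inputs:      (num_tokens, in_features) multimodal input activations
    attn_scores: (num_tokens,) attention received by each input token
    """
    w = attn_scores.softmax(dim=0)                   # emphasize representative tokens
    act_norm = torch.sqrt((w[:, None] * inputs.pow(2)).sum(dim=0))  # (in_features,)
    score = weight.abs() * act_norm[None, :]         # per-weight importance
    k = max(1, int(sparsity * score.numel()))        # number of weights to remove
    threshold = score.flatten().kthvalue(k).values
    return weight * (score > threshold)              # zero out low-importance weights
```

As a usage illustration, one would collect per-layer output tokens from a small calibration set, compute `layer_diversity` for each layer, map the scores to per-layer ratios with `allocate_sparsity`, and then call `prune_layer` on each linear layer with its recorded input activations and attention scores; again, this pipeline is an illustrative reading of the abstract, not the authors' released implementation.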
Similar Papers
Efficient LLMs with AMP: Attention Heads and MLP Pruning
Machine Learning (CS)
Makes smart computer programs run faster and smaller.
Towards Adaptive Visual Token Pruning for Large Multimodal Models
CV and Pattern Recognition
Makes AI understand pictures faster and cheaper.
Towards Extreme Pruning of LLMs with Plug-and-Play Mixed Sparsity
Computation and Language
Makes AI models smaller and faster.