Modality Inflation: Energy Characterization and Optimization Opportunities for MLLM Inference

Published: December 27, 2025 | arXiv ID: 2512.22695v1

By: Mona Moghadampanah, Adib Rezaei Shahmirzadi, Farhana Amin, and more

Potential Business Impact:

Shows how to reduce the energy AI models consume when processing images alongside text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multimodal large language models (MLLMs) are built on text-only LLMs by incorporating additional modalities, enabling multimodal understanding and a broader range of applications. However, these additions introduce a previously unexplored energy trade-off across modalities that remains poorly understood, as most prior work focuses on text-only models. In this paper, we examine modality inflation, a key source of inefficiency in which multimodal inputs increase inference workloads through extra encoding stages and expanded token sequences. We provide the first detailed, stage-level analysis of energy consumption in MLLM inference by breaking the pipeline into vision encoding, prefill, and decoding stages. Using four representative MLLMs evaluated on an NVIDIA A100 GPU, we quantify the additional energy required for multimodal inference compared to text-only baselines, observing overheads ranging from 17% to 94% across models for identical inputs. Our results show that energy bottlenecks differ widely across model architectures, stemming either from compute-heavy vision encoders or from the downstream impact of large visual token sequences during prefill. By examining GPU power traces, we further uncover substantial GPU underutilization during multimodal execution and show that input complexity leads to markedly different energy scaling behaviors across models. Finally, we demonstrate that stage-wise dynamic voltage and frequency scaling (DVFS) is an effective optimization, allowing energy savings with only modest performance impact. Together, these findings offer practical insights and concrete guidance for designing more energy-efficient multimodal LLM serving systems.
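The stage-level accounting the abstract describes can be illustrated with a small sketch: given a GPU power trace (timestamped watt samples, e.g. from NVML polling) and the start/end times of the vision-encoding, prefill, and decoding stages, per-stage energy is the integral of power over each stage window. The function below and its sample data are hypothetical illustrations, not the paper's actual measurement code.

```python
# Minimal sketch: split a GPU power trace into per-stage energy (joules)
# by trapezoidal integration over each stage's time window.
# `trace` and `stages` below are made-up example values.

def stage_energy(trace, stages):
    """trace: list of (t_seconds, power_watts) samples, sorted by time.
    stages: dict mapping stage name -> (t_start, t_end).
    Returns dict mapping stage name -> energy in joules."""
    energy = {}
    for name, (t0, t1) in stages.items():
        # Keep only samples inside this stage's window.
        samples = [(t, p) for t, p in trace if t0 <= t <= t1]
        joules = 0.0
        # Trapezoidal rule: average adjacent samples times their spacing.
        for (ta, pa), (tb, pb) in zip(samples, samples[1:]):
            joules += 0.5 * (pa + pb) * (tb - ta)
        energy[name] = joules
    return energy

# Example: 200 W held for a 1 s prefill window yields 200 J;
# decoding then ramps down from 200 W to a steady 100 W.
trace = [(0.0, 200.0), (0.5, 200.0), (1.0, 200.0),
         (1.5, 100.0), (2.0, 100.0)]
stages = {"prefill": (0.0, 1.0), "decode": (1.0, 2.0)}
print(stage_energy(trace, stages))  # → {'prefill': 200.0, 'decode': 125.0}
```

In practice the trace would come from polling the GPU's power sensor (e.g. via NVML) at a fixed interval during inference, with stage boundaries recorded by the serving code; sampling granularity then bounds how sharply short stages like vision encoding can be resolved.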

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing