ToFu: Visual Tokens Reduction via Fusion for Multi-modal, Multi-patch, Multi-image Task
By: Vittorio Pippi, Matthieu Guillaumin, Silvia Cascianelli, and more
Potential Business Impact:
Fuses redundant visual tokens so AI can understand more images at lower cost.
Large Multimodal Models (LMMs) are powerful tools that are capable of reasoning and understanding multimodal information beyond text and language. Despite their entrenched impact, the development of LMMs is hindered by their higher computational requirements compared to their unimodal counterparts. One of the main causes of this is the large number of tokens needed to encode the visual input, which is especially evident in multi-image multimodal tasks. Recent approaches to reduce visual tokens depend on the visual encoder architecture, require fine-tuning the LLM to maintain performance, and only consider single-image scenarios. To address these limitations, we propose ToFu, a visual encoder-agnostic, training-free Token Fusion strategy that combines redundant visual tokens of LMMs for high-resolution, multi-image tasks. The core intuition behind our method is straightforward yet effective: preserve distinctive tokens while combining similar ones. We achieve this by sequentially examining visual tokens and deciding whether to merge them with others or keep them as separate entities. We validate our approach on the well-established LLaVA-Interleave Bench, which covers challenging multi-image tasks. In addition, we push our method to the extreme by testing it on ComPairs, a newly created benchmark focused on multi-image comparisons in which a larger number of images and visual tokens are fed to the LMMs. Our extensive analysis, considering several LMM architectures, demonstrates the benefits of our approach both in terms of efficiency and performance gain.
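As a rough illustration of the fusion idea described in the abstract, the Python sketch below sequentially walks over the visual tokens, merges each token into its most similar already-kept representative when their similarity exceeds a threshold, and otherwise keeps it as a new, distinctive token. The cosine similarity measure, the threshold tau, and the running-mean merge are assumptions made for this sketch only; the paper's exact merging criterion and hyperparameters may differ.

import numpy as np

def tofu_fuse(tokens: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """Sequentially fuse similar visual tokens (illustrative sketch).

    tokens: (N, D) array of visual token embeddings.
    tau: similarity threshold above which a token is merged (assumed value).
    Returns an (M, D) array with M <= N fused tokens.
    """
    kept = []    # representative (possibly fused) token vectors
    counts = []  # how many original tokens each representative has absorbed

    for t in tokens:
        if kept:
            reps = np.stack(kept)
            # Cosine similarity between the current token and each representative
            sims = reps @ t / (np.linalg.norm(reps, axis=1) * np.linalg.norm(t) + 1e-8)
            j = int(np.argmax(sims))
            if sims[j] >= tau:
                # Similar token: merge via a running mean of all tokens assigned to j
                kept[j] = (kept[j] * counts[j] + t) / (counts[j] + 1)
                counts[j] += 1
                continue
        # Distinctive token: keep it as a new representative
        kept.append(t.astype(np.float64))
        counts.append(1)

    return np.stack(kept)

# Example: a batch of visual tokens (e.g., 576 tokens of dimension 1024) is
# reduced to fewer fused tokens before being passed to the language model.
fused = tofu_fuse(np.random.randn(576, 1024).astype(np.float32), tau=0.9)
print(fused.shape)

Because such a pass only needs the token embeddings themselves, not gradients or encoder-specific attention maps, it reflects why a training-free, sequential fusion strategy can remain agnostic to the visual encoder architecture.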
Similar Papers
Learning Compact Vision Tokens for Efficient Large Multimodal Models
CV and Pattern Recognition
Makes AI understand pictures much faster.
FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens
CV and Pattern Recognition
Lets computers understand pictures and words together.
Towards Adaptive Visual Token Pruning for Large Multimodal Models
CV and Pattern Recognition
Makes AI understand pictures faster and cheaper.