Fourier-VLM: Compressing Vision Tokens in the Frequency Domain for Large Vision-Language Models
By: Huanyu Wang, Jushi Kai, Haoli Bai, and more
Potential Business Impact:
Makes computers understand pictures much faster.
Vision-Language Models (VLMs) typically replace the predefined image placeholder token (<image>) in textual instructions with visual features from an image encoder, forming the input to a backbone Large Language Model (LLM). However, the large number of vision tokens significantly increases the context length, leading to high computational overhead and inference latency. While previous efforts mitigate this by selecting only important visual features or leveraging learnable queries to reduce the token count, they often compromise performance or introduce substantial extra costs. In response, we propose Fourier-VLM, a simple yet efficient method that compresses visual representations in the frequency domain. Our approach is motivated by the observation that vision features produced by the vision encoder exhibit energy concentrated in the low-frequency components. Leveraging this, we apply a low-pass filter to the vision features using a two-dimensional Discrete Cosine Transform (DCT). Notably, the DCT is efficiently computed via the Fast Fourier Transform (FFT) operator with a time complexity of $\mathcal{O}(n\log n)$, minimizing the extra computational cost while introducing no additional parameters. Extensive experiments across various image-based benchmarks demonstrate that Fourier-VLM achieves competitive performance with strong generalizability across both LLaVA and Qwen-VL architectures. Crucially, it reduces inference FLOPs by up to 83.8% and boosts generation speed by 31.2% compared to LLaVA-v1.5, highlighting its superior efficiency and practicality.
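To make the core idea concrete, here is a minimal sketch of frequency-domain token compression as the abstract describes it: a 2D DCT over the spatial grid of vision features, followed by cropping the low-frequency block. The function name `compress_vision_tokens`, the `keep` parameter, the 24x24 -> 8x8 grid sizes, and the final inverse DCT back to the spatial domain are all illustrative assumptions, not details confirmed by the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn  # DCT/IDCT computed via FFT, O(n log n)

def compress_vision_tokens(features: np.ndarray, keep: int) -> np.ndarray:
    """Hypothetical sketch of DCT-based vision-token compression.

    features: (H, W, D) spatial grid of vision-encoder features
    keep:     side length of the retained low-frequency block (keep < H, W)
    Returns:  (keep, keep, D) grid, i.e. keep*keep compressed tokens
    """
    # 2D type-II DCT over the spatial axes; the channel axis is untouched.
    freq = dctn(features, type=2, axes=(0, 1), norm="ortho")
    # Low-pass filter: for DCT, energy concentrates in the top-left
    # (low-frequency) corner of the spectrum, so cropping it keeps most
    # of the signal while discarding the remaining coefficients.
    low = freq[:keep, :keep, :]
    # Assumption: map the retained coefficients back to the spatial domain
    # before they replace the <image> placeholder tokens fed to the LLM.
    return idctn(low, type=2, axes=(0, 1), norm="ortho")

# Example: a 24x24 grid of 1024-d features (576 tokens) compressed to an
# 8x8 grid (64 tokens), roughly an 89% reduction in token count.
grid = np.random.randn(24, 24, 1024).astype(np.float32)
compressed = compress_vision_tokens(grid, keep=8)
print(compressed.shape)  # (8, 8, 1024)
```

Because the low-pass step is a plain slice of the DCT spectrum, this adds no learnable parameters, consistent with the parameter-free claim in the abstract.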
Similar Papers
Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models
CV and Pattern Recognition
Lets computers understand pictures even in bad light.
Fourier-Attentive Representation Learning: A Fourier-Guided Framework for Few-Shot Generalization in Vision-Language Models
CV and Pattern Recognition
Teaches computers to see style and shape separately.