Focus: A Streaming Concentration Architecture for Efficient Vision-Language Models
By: Chiyue Wei, Cong Guo, Junyao Zhang, and more
Potential Business Impact:
Makes AI watch videos faster and use less power.
Vision-Language Models (VLMs) have demonstrated strong performance on tasks such as video captioning and visual question answering. However, their growing scale and video-level inputs lead to significant computational and memory overhead, posing challenges for real-time deployment on hardware accelerators. While prior work attempts to reduce redundancy via token pruning or merging, these methods typically operate at coarse granularity and incur high runtime overhead due to global token-level operations. In this study, we propose Focus, a Streaming Concentration Architecture that efficiently accelerates VLM inference through progressive, fine-grained redundancy elimination. Focus introduces a multilevel concentration paradigm that hierarchically compresses vision-language inputs at three levels: (1) semantic-guided token pruning based on textual prompts, (2) spatial-temporal block-level concentration using localized comparisons, and (3) vector-level redundancy removal via motion-aware matching. All concentration steps are tightly co-designed with the architecture to support streaming-friendly, on-chip execution. Focus leverages GEMM tiling, convolution-style layout, and cross-modal attention to minimize off-chip access while enabling high throughput. Implemented as a modular unit within a systolic-array accelerator, Focus achieves a 2.4x speedup and 3.3x reduction in energy, significantly outperforming state-of-the-art accelerators in both performance and energy efficiency. Full-stack implementation of Focus is open-sourced at https://github.com/dubcyfor3/Focus.
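The abstract's first concentration level, semantic-guided token pruning based on textual prompts, can be illustrated with a small standalone sketch. The snippet below scores each vision token by its similarity to the prompt tokens and keeps only the most relevant ones. This is a conceptual sketch only: the function name `prune_tokens`, the `keep_ratio` parameter, and the cosine-similarity scoring are assumptions for illustration, not the paper's actual mechanism, which is co-designed with the streaming hardware unit.

```python
# Illustrative sketch of semantic-guided vision-token pruning (level 1 of the
# multilevel concentration described in the abstract). Names and the exact
# scoring rule are hypothetical, not taken from the Focus implementation.
import torch

def prune_tokens(vision_tokens: torch.Tensor,
                 text_tokens: torch.Tensor,
                 keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the vision tokens most relevant to the textual prompt.

    vision_tokens: (N_v, D) vision token embeddings
    text_tokens:   (N_t, D) prompt token embeddings
    """
    # Cosine similarity between every vision token and every prompt token.
    v = torch.nn.functional.normalize(vision_tokens, dim=-1)
    t = torch.nn.functional.normalize(text_tokens, dim=-1)
    sim = v @ t.T                        # (N_v, N_t)
    relevance = sim.max(dim=-1).values   # best-matching prompt token per vision token

    # Retain only the top keep_ratio fraction of vision tokens,
    # preserving their original (spatial-temporal) order.
    k = max(1, int(keep_ratio * vision_tokens.shape[0]))
    keep_idx = relevance.topk(k).indices.sort().values
    return vision_tokens[keep_idx]

# Example: 256 vision tokens and 16 prompt tokens, 768-d embeddings.
pruned = prune_tokens(torch.randn(256, 768), torch.randn(16, 768), keep_ratio=0.5)
print(pruned.shape)  # torch.Size([128, 768])
```

The later levels (block-level spatial-temporal concentration and motion-aware vector matching) operate on progressively finer granularity and, per the abstract, are executed on-chip in a streaming fashion; they are not reproducible from the abstract alone, so no sketch is attempted for them.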
Similar Papers
VFocus: Better Verilog Generation from Large Language Model via Focused Reasoning
Hardware Architecture
Makes computer chips work right by fixing code.
Weaving Context Across Images: Improving Vision-Language Models through Focus-Centric Visual Chains
CV and Pattern Recognition
Helps computers understand many pictures at once.
Mitigating Cross-Image Information Leakage in LVLMs for Multi-Image Tasks
CV and Pattern Recognition
Helps computers understand many pictures at once.