Score: 4

Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache Compression

Published: May 26, 2025 | arXiv ID: 2505.19602v1

By: Kunjun Li, Zigeng Chen, Cheng-Yen Yang, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Makes AI image generation use much less memory.

Business Areas:
Image Recognition Data and Analytics, Software

Visual Autoregressive (VAR) modeling has garnered significant attention for its innovative next-scale prediction approach, which yields substantial improvements in efficiency, scalability, and zero-shot generalization. Nevertheless, the coarse-to-fine methodology inherent in VAR results in exponential growth of the KV cache during inference, causing considerable memory consumption and computational redundancy. To address these bottlenecks, we introduce ScaleKV, a novel KV cache compression framework tailored for VAR architectures. ScaleKV leverages two critical observations: varying cache demands across transformer layers and distinct attention patterns at different scales. Based on these insights, ScaleKV categorizes transformer layers into two functional groups: drafters and refiners. Drafters exhibit dispersed attention across multiple scales, thereby requiring greater cache capacity. Conversely, refiners focus attention on the current token map to process local details, consequently necessitating substantially reduced cache capacity. ScaleKV optimizes the multi-scale inference pipeline by identifying scale-specific drafters and refiners, facilitating differentiated cache management tailored to each scale. Evaluation on the state-of-the-art text-to-image VAR model family, Infinity, demonstrates that our approach reduces the required KV cache memory to 10% of its original size while preserving pixel-level fidelity.
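
The abstract does not spell out how drafter/refiner classification or per-layer budgeting might work, so the following is only a minimal sketch of the general idea under stated assumptions: a layer is labeled a "drafter" if most of its attention mass lands on cached tokens from earlier scales, and each layer's cache is then pruned to a role-dependent budget by keeping the positions with the highest cumulative attention. All function names, thresholds, and budget values here are illustrative assumptions, not the paper's implementation.

```python
import torch

def classify_layer(attn, current_len, dispersion_threshold=0.5):
    """Label a layer 'drafter' if most attention mass falls on cached
    tokens from earlier scales, else 'refiner'.
    attn: (heads, q_len, kv_len) attention weights for this layer.
    (Heuristic assumption; the paper's actual criterion may differ.)"""
    past_len = attn.shape[-1] - current_len
    mass_on_past = attn[..., :past_len].sum(-1).mean()
    return "drafter" if mass_on_past > dispersion_threshold else "refiner"

def compress_kv(keys, values, attn, budget):
    """Keep the `budget` cached positions with the highest total
    attention mass. keys/values: (heads, kv_len, dim)."""
    kv_len = keys.shape[1]
    if kv_len <= budget:
        return keys, values
    scores = attn.sum(dim=(0, 1))              # total mass per KV position
    keep = scores.topk(budget).indices.sort().values
    return keys[:, keep], values[:, keep]

# Toy usage for one layer at one scale (all sizes are made up).
heads, q_len, kv_len, dim = 8, 16, 256, 64
attn = torch.softmax(torch.randn(heads, q_len, kv_len), dim=-1)
k = torch.randn(heads, kv_len, dim)
v = torch.randn(heads, kv_len, dim)

role = classify_layer(attn, current_len=q_len)
budget = 192 if role == "drafter" else 32      # assumed per-role budgets
k, v = compress_kv(k, v, attn, budget)
print(role, k.shape)
```

Reclassifying layers at every scale, as the abstract describes, would let the same layer receive a large budget at coarse scales and a small one at fine scales, which is where the differentiated cache management would yield its savings.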

Country of Origin
🇺🇸 🇸🇬 United States, Singapore

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)