StageVAR: Stage-Aware Acceleration for Visual Autoregressive Models
By: Senmao Li, Kai Wang, Salman Khan, and more
Potential Business Impact:
Makes drawing pictures with computers much faster.
Visual Autoregressive (VAR) modeling departs from the next-token prediction paradigm of traditional Autoregressive (AR) models through next-scale prediction, enabling high-quality image generation. However, the VAR paradigm suffers from sharply increased computational complexity and running time at large-scale steps. Although existing acceleration methods reduce runtime for large-scale steps, they rely on manual step selection and overlook the varying importance of different stages in the generation process. To address this challenge, we present StageVAR, a systematic study and stage-aware acceleration framework for VAR models. Our analysis shows that early steps are critical for preserving semantic and structural consistency and should remain intact, while later steps mainly refine details and can be pruned or approximated for acceleration. Building on these insights, StageVAR introduces a plug-and-play acceleration strategy that exploits semantic irrelevance and low-rank properties in late-stage computations, without requiring additional training. StageVAR achieves up to a 3.4x speedup with only a 0.01 drop on GenEval and a 0.26 decrease on DPG, consistently outperforming existing acceleration baselines. These results highlight stage-aware design as a powerful principle for efficient visual autoregressive image generation.
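The abstract only sketches the mechanism, so below is a minimal illustrative sketch (in PyTorch) of the stage-aware idea: run the early, semantically critical scale steps exactly, and replace the late, detail-refining steps with a cheap low-rank approximation. Every name here (`model.predict_scale`, `low_rank_approx`, the `boundary` and `rank` values) is a hypothetical assumption for illustration, not the authors' actual StageVAR interface.

```python
# Sketch of stage-aware next-scale generation (assumed interfaces, not StageVAR's code).
import torch


def low_rank_approx(x: torch.Tensor, rank: int) -> torch.Tensor:
    """Truncated-SVD approximation of a (tokens x dim) activation map."""
    U, S, Vh = torch.linalg.svd(x, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]


def stage_aware_generate(model, scales, boundary=6, rank=32):
    """Run next-scale prediction: exact for early scales, approximate for late ones.

    `model.predict_scale(tokens, scale, ...)` is an assumed interface that returns
    the token map for the given scale step.
    """
    tokens = None
    for i, scale in enumerate(scales):
        if i < boundary:
            # Early stages: full computation to preserve semantics and structure.
            tokens = model.predict_scale(tokens, scale)
        else:
            # Late stages: these mostly refine details, so a cheaper pass plus a
            # low-rank approximation of the activations stands in for the full step.
            feats = model.predict_scale(tokens, scale, lightweight=True)
            tokens = low_rank_approx(feats, rank=rank)
    return tokens
```

The boundary between "exact" and "approximate" stages, and the rank used late in generation, are the kind of knobs a stage-aware scheme would tune against quality metrics such as GenEval and DPG.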
Similar Papers
Markovian Scale Prediction: A New Era of Visual Autoregressive Generation
CV and Pattern Recognition
Makes AI draw pictures faster and use less power.
Seg-VAR: Image Segmentation with Visual Autoregressive Modeling
CV and Pattern Recognition
Makes computers perfectly outline any object in pictures.
ActVAR: Activating Mixtures of Weights and Tokens for Efficient Visual Autoregressive Generation
CV and Pattern Recognition
Makes AI draw pictures faster and cheaper.