ActVAR: Activating Mixtures of Weights and Tokens for Efficient Visual Autoregressive Generation
By: Kaixin Zhang, Ruiqing Yang, Yuan Zhang, and more
Potential Business Impact:
Makes AI draw pictures faster and cheaper.
Visual Autoregressive (VAR) models enable efficient image generation via next-scale prediction but face escalating computational costs as sequence length grows. Existing static pruning methods degrade performance by permanently removing weights or tokens, disrupting pretrained dependencies. To address this, we propose ActVAR, a dynamic activation framework that introduces dual sparsity across model weights and token sequences to enhance efficiency without sacrificing capacity. ActVAR decomposes feedforward networks (FFNs) into lightweight expert sub-networks and employs a learnable router to dynamically select token-specific expert subsets based on content. Simultaneously, a gated token selector identifies high-update-potential tokens for computation while reconstructing unselected tokens to preserve global context and sequence alignment. Training employs a two-stage knowledge distillation strategy, where the original VAR model supervises the learning of routing and gating policies to align with pretrained knowledge. Experiments on the ImageNet $256\times 256$ benchmark demonstrate that ActVAR achieves up to $21.2\%$ FLOPs reduction with minimal performance degradation.
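The abstract describes two coupled mechanisms: a learnable router that activates only a token-specific subset of lightweight FFN experts (weight sparsity), and a gated token selector that computes updates only for high-update-potential tokens while passing the rest through to preserve sequence alignment (token sparsity). The PyTorch sketch below illustrates this general idea under our own assumptions; the class names (`ExpertFFN`, `ActVARBlockSketch`), the `top_k` routing, and the fixed `keep_ratio` are illustrative choices, not details taken from the paper, and the expert loop is written densely for clarity rather than as an efficient sparse dispatch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertFFN(nn.Module):
    """One lightweight expert sub-network carved out of a larger FFN (illustrative)."""

    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)))


class ActVARBlockSketch(nn.Module):
    """Dual-sparsity sketch: a token gate keeps only high-scoring tokens,
    and a router mixes a per-token top-k subset of FFN experts."""

    def __init__(self, dim=256, hidden_dim=1024, num_experts=4, top_k=2, keep_ratio=0.5):
        super().__init__()
        self.experts = nn.ModuleList(
            ExpertFFN(dim, hidden_dim // num_experts) for _ in range(num_experts)
        )
        self.router = nn.Linear(dim, num_experts)   # expert selection (weight sparsity)
        self.token_gate = nn.Linear(dim, 1)         # token selection (token sparsity)
        self.top_k = top_k
        self.keep_ratio = keep_ratio

    def forward(self, x):
        # x: (batch, seq_len, dim)
        B, N, D = x.shape

        # Token sparsity: keep only the highest-scoring tokens for computation.
        scores = self.token_gate(x).squeeze(-1)                 # (B, N)
        num_keep = max(1, int(N * self.keep_ratio))
        keep_idx = scores.topk(num_keep, dim=1).indices          # (B, num_keep)
        selected = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))

        # Weight sparsity: route each selected token to its top-k experts.
        logits = self.router(selected)                           # (B, num_keep, E)
        topk_w, topk_i = logits.topk(self.top_k, dim=-1)
        topk_w = topk_w.softmax(dim=-1)

        out = torch.zeros_like(selected)
        for e, expert in enumerate(self.experts):
            mask = (topk_i == e)                                 # which tokens use expert e
            if mask.any():
                weight = (topk_w * mask).sum(-1, keepdim=True)   # 0 where expert unused
                out = out + weight * expert(selected)            # dense for clarity, not speed

        # Unselected tokens pass through unchanged; selected tokens get the expert-mixture
        # output added back, keeping global context and sequence alignment.
        y = x.clone()
        y.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, D), selected + out)
        return y


if __name__ == "__main__":
    block = ActVARBlockSketch()
    tokens = torch.randn(2, 64, 256)
    print(block(tokens).shape)  # torch.Size([2, 64, 256])
```

In an actual deployment the router and gate would be trained with the two-stage knowledge distillation described above, with the original VAR model supervising the routing and gating policies; the sketch only shows the forward-pass structure of the dual sparsity.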
Similar Papers
DiverseVAR: Balancing Diversity and Quality of Next-Scale Visual Autoregressive Models
CV and Pattern Recognition
Makes AI create more different pictures from the same words.
FVAR: Visual Autoregressive Modeling via Next Focus Prediction
CV and Pattern Recognition
Makes pictures clearer by fixing blur.
Progressive Supernet Training for Efficient Visual Autoregressive Modeling
CV and Pattern Recognition
Makes AI image creation faster and use less memory.