Fine-Tuning Next-Scale Visual Autoregressive Models with Group Relative Policy Optimization
By: Matteo Gallici, Haitz Sáez de Ocáriz Borde
Potential Business Impact:
Makes AI draw pictures better and in new styles.
Fine-tuning pre-trained generative models with Reinforcement Learning (RL) has emerged as an effective approach for aligning outputs more closely with nuanced human preferences. In this paper, we investigate the application of Group Relative Policy Optimization (GRPO) to fine-tune next-scale visual autoregressive (VAR) models. Our empirical results demonstrate that this approach enables alignment to intricate reward signals derived from aesthetic predictors and CLIP embeddings, significantly enhancing image quality and enabling precise control over the generation style. Interestingly, by leveraging CLIP, our method can help VAR models generalize beyond their initial ImageNet distribution: through RL-driven exploration, these models can generate images aligned with prompts referencing image styles that were absent during pre-training. In summary, we show that RL-based fine-tuning is both efficient and effective for VAR models, which benefit particularly from their fast inference speed: rapid sampling makes online RL practical, whereas online sampling remains a significant challenge for diffusion-based alternatives.
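To make the GRPO recipe concrete, the sketch below illustrates the core update on a toy categorical policy: sample a group of outputs, score them with a reward model, normalize rewards within the group to get advantages, and apply a clipped policy-gradient step with a KL penalty toward the frozen pre-trained reference. This is a minimal illustration under stated assumptions, not the authors' code: the stand-in policy, `sample_tokens`, `token_log_probs`, `reward_fn`, and all hyperparameters are hypothetical placeholders for the VAR token sampler and a CLIP/aesthetic reward.

```python
# Minimal GRPO update sketch (hypothetical interfaces, not the paper's implementation).
# A toy categorical "policy" stands in for the VAR token sampler; in practice,
# sample_tokens / token_log_probs would wrap the VAR model and reward_fn would
# score decoded images with CLIP similarity or an aesthetic predictor.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, seq_len, group_size = 32, 8, 6              # toy sizes (assumptions)
logits = torch.zeros(vocab, requires_grad=True)    # stand-in policy parameters
ref_logits = logits.detach().clone()               # frozen pre-trained reference
opt = torch.optim.Adam([logits], lr=1e-2)

def sample_tokens(n):
    """Sample n token sequences from the current policy (VAR sampler in practice)."""
    probs = F.softmax(logits, dim=-1)
    return torch.multinomial(probs.expand(n * seq_len, -1), 1).view(n, seq_len)

def token_log_probs(params, tokens):
    """Per-token log-probabilities under the policy defined by `params`."""
    return F.log_softmax(params, dim=-1)[tokens]   # shape (n, seq_len)

def reward_fn(tokens):
    """Placeholder reward; replace with CLIP / aesthetic scoring of decoded images."""
    return (tokens == 3).float().mean(dim=-1)      # toy objective: prefer token id 3

for step in range(200):
    with torch.no_grad():
        tokens = sample_tokens(group_size)         # one group of samples per prompt
        old_logp = token_log_probs(logits, tokens)
        rewards = reward_fn(tokens)
        # Group-relative advantage: normalize rewards within the sampled group.
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-4)
    new_logp = token_log_probs(logits, tokens)
    ratio = (new_logp - old_logp).exp()
    adv_tok = adv[:, None].expand_as(ratio)
    clipped = torch.clamp(ratio, 0.8, 1.2) * adv_tok
    pg_loss = -torch.min(ratio * adv_tok, clipped).mean()
    # KL penalty keeps the fine-tuned policy close to the pre-trained reference.
    ref_logp = token_log_probs(ref_logits, tokens)
    delta = ref_logp - new_logp
    kl = delta.exp() - delta - 1.0
    loss = pg_loss + 0.05 * kl.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because each update only needs a freshly sampled group of images plus scalar rewards, the loop highlights why fast VAR inference matters: online sampling dominates the cost of RL fine-tuning.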
Similar Papers
AR-GRPO: Training Autoregressive Image Generation Models via Reinforcement Learning
CV and Pattern Recognition
Makes AI create better, more realistic pictures.
DeepVideo-R1: Video Reinforcement Fine-Tuning via Difficulty-aware Regressive GRPO
CV and Pattern Recognition
Helps AI understand videos better by learning smarter.
VAR RL Done Right: Tackling Asynchronous Policy Conflicts in Visual Autoregressive Generation
CV and Pattern Recognition
Teaches AI to create better pictures faster.