Visual Autoregressive Models Beat Diffusion Models on Inference Time Scaling
By: Erik Riise, Mehmet Onurcan Kaya, Dim P. Papadopoulos
Potential Business Impact:
Makes AI draw better pictures faster.
While inference-time scaling through search has revolutionized Large Language Models, translating these gains to image generation has proven difficult. Recent attempts to apply search strategies to continuous diffusion models show limited benefits, with simple random sampling often performing best. We demonstrate that the discrete, sequential nature of visual autoregressive models enables effective search for image generation. We show that beam search substantially improves text-to-image generation, enabling a 2B parameter autoregressive model to outperform a 12B parameter diffusion model across benchmarks. Systematic ablations show that this advantage comes from the discrete token space, which allows early pruning and computational reuse, and our verifier analysis highlights trade-offs between speed and reasoning capability. These findings suggest that model architecture, not just scale, is critical for inference-time optimization in visual generation.
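The beam-search idea the abstract describes can be sketched in a few lines. This is a minimal, generic illustration, not the paper's implementation: `step_logprob` stands in for the autoregressive model's next-token distribution, `score_final` for the verifier, and the vocabulary and sequence length are toy assumptions.

```python
import heapq
import math

def beam_search(vocab, step_logprob, score_final, seq_len, beam_width):
    """Beam search over a discrete token space.

    vocab: candidate tokens at each step.
    step_logprob(prefix, token): log-probability of appending `token`
        (stand-in for the autoregressive image model).
    score_final(seq): verifier score used to rank completed sequences.
    """
    beams = [((), 0.0)]  # (token prefix, cumulative log-prob)
    for _ in range(seq_len):
        candidates = [
            (prefix + (tok,), lp + step_logprob(prefix, tok))
            for prefix, lp in beams
            for tok in vocab
        ]
        # Early pruning: only the top-`beam_width` prefixes survive,
        # so discarded branches are never extended again, and shared
        # prefixes allow cached computation to be reused.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    # The verifier picks the final output among surviving beams.
    return max(beams, key=lambda b: score_final(b[0]))[0]
```

For example, with a two-token vocabulary where the model prefers token 1 and a verifier that counts 1s, `beam_search([0, 1], lambda p, t: math.log(0.7 if t == 1 else 0.3), sum, seq_len=4, beam_width=2)` returns `(1, 1, 1, 1)`. Continuous diffusion models have no analogous discrete branch points, which is why this kind of early pruning is hard to apply there.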
Similar Papers
Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation
CV and Pattern Recognition
Creates pictures from words much faster.
Progress by Pieces: Test-Time Scaling for Autoregressive Image Generation
CV and Pattern Recognition
Makes AI pictures better and faster.
NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
CV and Pattern Recognition
Creates amazing pictures from words.