Multi-scale Autoregressive Models are Laplacian, Discrete, and Latent Diffusion Models in Disguise
By: Steve Hong, Samuel Belkadi
Potential Business Impact:
Makes AI generate images faster and with higher quality.
We revisit Visual Autoregressive (VAR) models through the lens of an iterative-refinement framework. Rather than viewing VAR solely as next-scale autoregression, we formalise it as a deterministic forward process that constructs a Laplacian-style latent pyramid, paired with a learned backward process that reconstructs it in a small number of coarse-to-fine steps. This view connects VAR to denoising diffusion and isolates three design choices that help explain its efficiency and fidelity: refining in a learned latent space, casting prediction as discrete classification over code indices, and partitioning the task by spatial frequency. We run controlled experiments to quantify each factor's contribution to fidelity and speed, and we outline how the same framework extends to permutation-invariant graph generation and to probabilistic, ensemble-style medium-range weather forecasting. The framework also suggests practical interfaces for VAR to leverage tools from the diffusion ecosystem while retaining few-step, scale-parallel generation.
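To make the iterative-refinement reading concrete, the following is a minimal Python (PyTorch) sketch of the two processes described above: a deterministic forward pass that decomposes a latent feature map into a Laplacian-style pyramid of per-scale residuals, and a learned backward pass that rebuilds the latent coarse-to-fine in a handful of steps. The scale schedule, the 16-channel latent, and the predict_residual interface are illustrative assumptions, not the paper's implementation; in VAR proper each residual would additionally be vector-quantized and predicted as discrete code indices.

import torch
import torch.nn.functional as F

SCALES = [(4, 4), (8, 8), (16, 16), (32, 32)]  # assumed scale schedule, coarse -> fine

def forward_process(latent: torch.Tensor) -> list[torch.Tensor]:
    # Deterministic forward pass: decompose `latent` (B, C, H, W) into
    # per-scale residuals, ordered coarse -> fine (a Laplacian-style pyramid
    # built in latent space rather than pixel space).
    targets = [F.interpolate(latent, size=s, mode="area") for s in SCALES]
    residuals, running = [], torch.zeros_like(targets[0])
    for target in targets:
        running = F.interpolate(running, size=target.shape[-2:],
                                mode="bilinear", align_corners=False)
        residuals.append(target - running)   # detail missing at this scale
        running = running + residuals[-1]    # exact reconstruction so far
    return residuals

def backward_process(predict_residual, shape=(1, 16, 32, 32)) -> torch.Tensor:
    # Learned backward pass: rebuild the latent in len(SCALES) steps.
    # `predict_residual(partial, scale)` is an assumed model interface that
    # returns the (dequantized) residual for the given scale; in VAR this
    # prediction is a discrete classification over codebook indices.
    running = torch.zeros(shape[0], shape[1], *SCALES[0])
    for scale in SCALES:
        running = F.interpolate(running, size=scale,
                                mode="bilinear", align_corners=False)
        running = running + predict_residual(running, scale)
    return running  # full-resolution latent, ready for the VAE decoder

Because each step operates on a whole scale at once, generation takes only len(SCALES) refinement steps rather than one step per token, which is the few-step, scale-parallel property the abstract refers to.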
Similar Papers
FVAR: Visual Autoregressive Modeling via Next Focus Prediction
CV and Pattern Recognition
Makes pictures clearer by fixing blur.
Markovian Scale Prediction: A New Era of Visual Autoregressive Generation
CV and Pattern Recognition
Makes AI draw pictures faster and use less power.
Visual Autoregressive Modeling for Instruction-Guided Image Editing
CV and Pattern Recognition
Edits pictures precisely, following your exact words.