FVAR: Visual Autoregressive Modeling via Next Focus Prediction
By: Xiaofan Li, Chenming Wu, Yanpeng Sun, and more
Potential Business Impact:
Makes AI-generated pictures sharper by removing jagged edges and moiré patterns.
Visual autoregressive models achieve remarkable generation quality through next-scale predictions across multi-scale token pyramids. However, the conventional method uses uniform scale downsampling to build these pyramids, leading to aliasing artifacts that compromise fine details and introduce unwanted jaggies and moiré patterns. To tackle this issue, we present \textbf{FVAR}, which reframes the paradigm from \emph{next-scale prediction} to \emph{next-focus prediction}, mimicking the natural process of camera focusing from blur to clarity. Our approach introduces three key innovations: \textbf{1) Next-Focus Prediction Paradigm} that transforms multi-scale autoregression by progressively reducing blur rather than simply downsampling; \textbf{2) Progressive Refocusing Pyramid Construction} that uses physics-consistent defocus kernels to build clean, alias-free multi-scale representations; and \textbf{3) High-Frequency Residual Learning} that employs a specialized residual teacher network to effectively incorporate alias information during training while maintaining deployment simplicity. Specifically, we construct optical low-pass views using defocus point spread function (PSF) kernels with decreasing radius, creating smooth blur-to-clarity transitions that eliminate aliasing at its source. To further enhance detail generation, we introduce a High-Frequency Residual Teacher that learns from both clean structure and alias residuals, distilling this knowledge into a vanilla VAR deployment network for seamless inference. Extensive experiments on ImageNet demonstrate that FVAR substantially reduces aliasing artifacts, improves fine detail preservation, and enhances text readability, achieving superior performance while remaining fully compatible with existing VAR frameworks.
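To make the pyramid-construction idea concrete, below is a minimal sketch (not the authors' code) of how a blur-to-clarity pyramid could be built: each scale is produced by convolving the image with a disk-shaped defocus PSF whose radius shrinks as resolution grows, then resampling, so that coarse scales see a strongly low-passed (alias-free) image and the finest scale approaches the sharp original. The disk kernel, the linear radius schedule, and the scale list are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of a "progressive refocusing" pyramid: optical low-pass (defocus PSF)
# before decimation, with the blur radius decreasing toward the finest scale.
import numpy as np
from scipy.signal import fftconvolve


def disk_psf(radius: float) -> np.ndarray:
    """Normalized disk-shaped defocus point spread function."""
    r = max(int(np.ceil(radius)), 1)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (x ** 2 + y ** 2 <= radius ** 2).astype(np.float64)
    return kernel / kernel.sum()


def refocus_pyramid(image: np.ndarray, scales=(16, 32, 64, 128, 256),
                    max_radius: float = 8.0) -> list[np.ndarray]:
    """Build blur-to-clarity views of a square grayscale image (H x H)."""
    full = image.shape[0]
    views = []
    for i, s in enumerate(scales):
        # Radius shrinks linearly with scale index (illustrative schedule).
        radius = max_radius * (1.0 - i / (len(scales) - 1))
        blurred = image if radius < 0.5 else fftconvolve(
            image, disk_psf(radius), mode="same")
        # Low-pass filtering before decimation suppresses aliasing at the source.
        step = full // s
        views.append(blurred[::step, ::step])
    return views


if __name__ == "__main__":
    img = np.random.rand(256, 256)  # stand-in for a real image
    for view in refocus_pyramid(img):
        print(view.shape)
```

In a full training pipeline these views would replace the uniformly downsampled pyramid fed to the multi-scale tokenizer, while the residual-teacher distillation described above handles the high-frequency detail that the defocus filtering removes.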
Similar Papers
Markovian Scale Prediction: A New Era of Visual Autoregressive Generation
CV and Pattern Recognition
Makes AI draw pictures faster and use less power.
Multi-scale Autoregressive Models are Laplacian, Discrete, and Latent Diffusion Models in Disguise
Machine Learning (CS)
Makes AI draw pictures faster and better.
Visual Autoregressive Modeling for Instruction-Guided Image Editing
CV and Pattern Recognition
Edits pictures perfectly, following your exact words.