From Sketch to Fresco: Efficient Diffusion Transformer with Progressive Resolution
By: Shikang Zheng, Guantao Chen, Lixuan He, and more
Potential Business Impact:
Makes AI art and videos faster to create.
Diffusion Transformers achieve impressive generative quality but remain computationally expensive due to iterative sampling. Recently, dynamic resolution sampling has emerged as a promising acceleration technique that reduces the resolution of early sampling steps. However, existing methods rely on heuristic re-noising at every resolution transition, injecting noise that breaks cross-stage consistency and forces the model to relearn global structure. In addition, these methods indiscriminately upsample the entire latent at once without checking which regions have actually converged, causing accumulated errors and visible artifacts. We therefore propose Fresco, a dynamic resolution framework that unifies re-noising and global structure across stages through progressive upsampling, preserving both the efficiency of low-resolution drafting and the fidelity of high-resolution refinement, with all stages aligned toward the same final target. Fresco achieves near-lossless acceleration across diverse domains and models, including a 10× speedup on FLUX and 5× on HunyuanVideo, while remaining orthogonal to distillation, quantization, and feature caching, reaching a 22× speedup when combined with distilled models. Our code is in the supplementary material and will be released on GitHub.
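The idea described above can be illustrated with a minimal toy sketch: draft at low resolution, upsample, and refine only regions that have not yet converged, with every stage pulled toward the same final target. This is an assumption-laden illustration, not the paper's implementation; `denoise_step` is a toy stand-in for a diffusion transformer, and the convergence threshold, resolutions, and step counts are all hypothetical.

```python
import numpy as np

def upsample(latent, factor):
    """Nearest-neighbor upsampling of a square latent (toy stand-in for a
    learned latent upsampler)."""
    return np.repeat(np.repeat(latent, factor, axis=0), factor, axis=1)

def denoise_step(latent, target):
    """Toy denoiser: move the latent a fixed fraction toward the target.
    A real sampler would instead query a diffusion transformer."""
    return latent + 0.5 * (target - latent)

def progressive_resolution_sample(target, resolutions=(4, 8, 16),
                                  steps_per_stage=4, tol=0.05):
    """Sketch of dynamic-resolution sampling with progressive upsampling:
    every stage is aligned to (a subsampled view of) the same final target,
    so no re-noising is injected at resolution transitions, and only
    not-yet-converged regions are refined."""
    rng = np.random.default_rng(0)
    latent = rng.standard_normal((resolutions[0], resolutions[0]))
    for size in resolutions:
        if latent.shape[0] != size:
            latent = upsample(latent, size // latent.shape[0])
        # Subsampled view of the final target at this stage's resolution.
        stride = target.shape[0] // size
        stage_target = target[::stride, ::stride]
        for _ in range(steps_per_stage):
            err = np.abs(latent - stage_target)
            mask = err > tol  # refine only regions that have not converged
            latent = np.where(mask, denoise_step(latent, stage_target), latent)
    return latent
```

The per-region mask is the key contrast with indiscriminate upsampling: converged regions are frozen, so refinement effort concentrates where the draft still disagrees with the target.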
Similar Papers
UltraImage: Rethinking Resolution Extrapolation in Image Diffusion Transformers
CV and Pattern Recognition
Makes AI create much bigger, clearer pictures.
NeuralRemaster: Phase-Preserving Diffusion for Structure-Aligned Generation
CV and Pattern Recognition
Keeps pictures' shapes while changing them.