Mitigating the Noise Shift for Denoising Generative Models via Noise Awareness Guidance
By: Jincheng Zhong, Boyuan Jiang, Xin Tao, and more
Potential Business Impact:
Helps AI image makers create better pictures.
Existing denoising generative models rely on solving discretized reverse-time SDEs or ODEs. In this paper, we identify a long-overlooked yet pervasive issue in this family of models: a misalignment between the pre-defined noise level and the actual noise level encoded in intermediate states during sampling. We refer to this misalignment as noise shift. Through empirical analysis, we demonstrate that noise shift is widespread in modern diffusion models and exhibits a systematic bias, leading to sub-optimal generation due to both out-of-distribution intermediate states and inaccurate denoising updates. To address this problem, we propose Noise Awareness Guidance (NAG), a simple yet effective correction method that explicitly steers sampling trajectories to remain consistent with the pre-defined noise schedule. We further introduce a classifier-free variant of NAG, which jointly trains a noise-conditional and a noise-unconditional model via noise-condition dropout, thereby eliminating the need for external classifiers. Extensive experiments, including ImageNet generation and various supervised fine-tuning tasks, show that NAG consistently mitigates noise shift and substantially improves the generation quality of mainstream diffusion models.
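The abstract does not spell out the exact update rule, but the classifier-free variant it describes reads like classifier-free guidance applied to the noise-level condition: drop the noise-level input during training, then combine the noise-conditional and noise-unconditional predictions at sampling time. The following is a minimal PyTorch-style sketch of that reading, not the authors' implementation; the epsilon-prediction interface model(x_t, t), the linear-beta DDPM schedule, the dropout probability p_drop, the null token NULL_T, and the guidance weight w are all illustrative assumptions.

```python
# Minimal sketch of a classifier-free "noise awareness" guidance setup.
# Assumptions (not from the paper): eps-prediction model(x_t, t), 4D image
# tensors (B, C, H, W), a linear-beta DDPM schedule, and a reserved timestep
# index NULL_T acting as the "no noise level" token (the model's timestep
# embedding must accept NUM_STEPS + 1 indices).
import torch
import torch.nn.functional as F

NUM_STEPS = 1000
NULL_T = NUM_STEPS                                   # null noise-level token
betas = torch.linspace(1e-4, 0.02, NUM_STEPS)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def add_noise(x0, eps, t):
    """Forward diffusion: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    a = alphas_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps

def train_step(model, x0, p_drop=0.1):
    """One training step with noise-condition dropout (assumed mechanism)."""
    b = x0.shape[0]
    t = torch.randint(0, NUM_STEPS, (b,))
    eps = torch.randn_like(x0)
    x_t = add_noise(x0, eps, t)
    # Replace the noise-level condition with the null token with prob. p_drop,
    # so the same network also learns a noise-unconditional prediction.
    drop = torch.rand(b) < p_drop
    t_cond = torch.where(drop, torch.full_like(t, NULL_T), t)
    return F.mse_loss(model(x_t, t_cond), eps)

@torch.no_grad()
def nag_epsilon(model, x_t, t, w=1.5):
    """CFG-style mix of noise-aware and noise-agnostic predictions (assumed rule)."""
    eps_cond = model(x_t, t)
    eps_uncond = model(x_t, torch.full_like(t, NULL_T))
    # Steer the trajectory toward states consistent with the scheduled noise level.
    return eps_uncond + w * (eps_cond - eps_uncond)
```

In use, the output of nag_epsilon would simply replace the plain model prediction inside whatever reverse-time solver is being run (DDPM, DDIM, or an ODE sampler); the paper's actual guidance rule and hyperparameters may differ from this sketch.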
Similar Papers
Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization
Machine Learning (CS)
Helps computers learn even with wrong answers.
Noise Projection: Closing the Prompt-Agnostic Gap Behind Text-to-Image Misalignment in Diffusion Models
CV and Pattern Recognition
Makes AI pictures match words better.
NoiseShift: Resolution-Aware Noise Recalibration for Better Low-Resolution Image Generation
CV and Pattern Recognition
Makes AI art look good at any size.