EMAG: Self-Rectifying Diffusion Sampling with Exponential Moving Average Guidance
By: Ankit Yadav, Ta Duc Huy, Lingqiao Liu
In diffusion and flow-matching generative models, guidance techniques are widely used to improve sample quality and consistency. Classifier-free guidance (CFG) is the de facto choice in modern systems; it improves quality by contrasting conditional and unconditional predictions. Recent work instead contrasts against negative predictions produced at inference time, e.g., via strong/weak model pairs, attention-based masking, stochastic block dropping, or perturbations to the self-attention energy landscape. While these strategies refine generation quality, they offer little control over the granularity or difficulty of the negatives, and the target layers are typically fixed by hand. We propose Exponential Moving Average Guidance (EMAG), a training-free mechanism that modifies attention in diffusion transformers at inference time, together with a statistics-based, adaptive layer-selection rule. Unlike prior methods, EMAG produces harder, semantically faithful negatives (fine-grained degradations) that surface difficult failure modes and let the denoiser correct subtle artifacts, improving the Human Preference Score (HPS) by +0.46 over CFG. We further show that EMAG composes naturally with advanced guidance techniques such as APG and CADS, yielding additional HPS gains.
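The abstract does not spell out the mechanism, so the following is a minimal sketch of one way an EMA-based negative branch could be wired into a diffusion transformer's attention, assuming the moving average runs over denoising steps and the guidance follows the usual CFG-style extrapolation away from the weaker branch. The function name, `ema_decay`, and `guidance_scale` are illustrative placeholders, not the authors' implementation.

```python
import torch

def emag_attention(q, k, v, ema_state=None, guidance_scale=1.5, ema_decay=0.9):
    """Hedged sketch of EMA-guided attention (not the paper's exact code).

    q, k, v: (batch, heads, tokens, dim) tensors for one selected layer.
    ema_state: EMA of the attention map, carried across denoising steps
               (assumes a fixed token count, as in standard sampling loops).
    """
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)

    # EMA over denoising steps yields a smoothed, "weaker" attention map,
    # playing the role of the hard negative branch.
    if ema_state is None:
        ema_state = attn.detach()
    else:
        ema_state = ema_decay * ema_state + (1 - ema_decay) * attn.detach()

    out_strong = attn @ v       # ordinary attention output
    out_weak = ema_state @ v    # degraded (negative) output

    # CFG-style extrapolation away from the EMA-smoothed negative branch.
    out = out_weak + guidance_scale * (out_strong - out_weak)
    return out, ema_state
```

In use, the sampler would carry `ema_state` for each guided layer across timesteps, with the adaptive, statistics-based rule from the paper deciding which layers receive this treatment; how that rule is computed is not described in the abstract.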
Similar Papers
Entropy Rectifying Guidance for Diffusion and Flow Models
CV and Pattern Recognition
Makes AI pictures better, more varied, and accurate.
S$^2$-Guidance: Stochastic Self Guidance for Training-Free Enhancement of Diffusion Models
CV and Pattern Recognition
Makes AI images and videos look better.