DiffIER: Optimizing Diffusion Models with Iterative Error Reduction

Published: August 19, 2025 | arXiv ID: 2508.13628v2

By: Ao Chen, Lihe Ding, Tianfan Xue

Potential Business Impact:

Improves the quality of AI-generated images, upscaled photos, and synthesized speech.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Diffusion models have demonstrated remarkable capabilities in generating high-quality samples and enhancing performance across diverse domains through Classifier-Free Guidance (CFG). However, the quality of generated samples is highly sensitive to the choice of guidance weight. In this work, we identify a critical "training-inference gap" and argue that this gap is what undermines conditional generation and makes outputs highly sensitive to the guidance weight. We quantify the gap by measuring the error accumulated during inference and show that the choice of guidance weight correlates with how well this gap is minimized. To mitigate the gap, we propose DiffIER, an optimization-based method for high-quality generation. We demonstrate that the accumulated error can be effectively reduced by iterative error minimization at each step during inference. This plug-and-play optimization framework reduces the error at every inference step and improves generation quality. Empirical results show that the proposed method outperforms baseline approaches on conditional generation tasks, and it achieves consistent success in text-to-image generation, image super-resolution, and text-to-speech generation, underscoring its versatility and potential for broad application in future research.
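
The abstract names two mechanisms worth making concrete: the CFG combination of conditional and unconditional noise estimates, and DiffIER's per-step error minimization during inference. The sketch below is a minimal, hedged PyTorch illustration, not the paper's implementation: the CFG formula and DDIM update are standard, but the inner objective (nudging x_{t-1} so the model's guided noise estimate there agrees with the estimate that produced it), the toy network ToyEps, the schedule alphas_bar, and all hyperparameters are illustrative assumptions, since the paper's exact error definition is not reproduced here.

```python
import torch

class ToyEps(torch.nn.Module):
    """Stand-in noise-prediction network; a real model would be a U-Net or DiT."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = torch.nn.Linear(dim + 2, dim)

    def forward(self, x, t, c):
        # Encode timestep and condition as crude scalar features.
        t_feat = torch.full((x.shape[0], 1), float(t) / 50.0)
        c_feat = torch.full((x.shape[0], 1), float(c))
        return self.net(torch.cat([x, t_feat, c_feat], dim=-1))

def cfg_noise(eps_model, x, t, cond, uncond, w):
    """Classifier-free guidance: eps_u + w * (eps_c - eps_u)."""
    e_c = eps_model(x, t, cond)
    e_u = eps_model(x, t, uncond)
    return e_u + w * (e_c - e_u)

def ddim_step(x_t, eps, ab_t, ab_prev):
    """Deterministic DDIM update from timestep t to t-1."""
    x0_hat = (x_t - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()
    return ab_prev.sqrt() * x0_hat + (1 - ab_prev).sqrt() * eps

def diffier_like_sample(eps_model, shape, cond, uncond, w, alphas_bar,
                        inner_steps=3, lr=0.05):
    """DDIM + CFG sampler with a small inner optimization at every step.
    The inner loop refines x_{t-1} so the guided noise estimate at x_{t-1}
    matches the estimate that generated it -- one plausible proxy for the
    paper's per-step accumulated error, not its actual loss."""
    T = len(alphas_bar) - 1
    x = torch.randn(shape)
    for t in range(T, 0, -1):
        ab_t, ab_prev = alphas_bar[t], alphas_bar[t - 1]
        eps = cfg_noise(eps_model, x, t, cond, uncond, w).detach()
        x_prev = ddim_step(x, eps, ab_t, ab_prev).detach().requires_grad_(True)
        opt = torch.optim.SGD([x_prev], lr=lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            # Penalize disagreement between the model's noise estimate at
            # x_{t-1} and the guided estimate used to reach x_{t-1}.
            err = (cfg_noise(eps_model, x_prev, t - 1, cond, uncond, w)
                   - eps).pow(2).mean()
            err.backward()
            opt.step()
        x = x_prev.detach()
    return x

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyEps(dim=8)
    alphas_bar = torch.linspace(1.0, 0.01, 51)  # toy noise schedule, t = 0..50
    sample = diffier_like_sample(model, (4, 8), cond=1.0, uncond=0.0,
                                 w=3.0, alphas_bar=alphas_bar)
    print(sample.shape)  # torch.Size([4, 8])
```

Because the refinement touches only the latent x_{t-1} and never the model weights, an inner loop of this form can in principle wrap any existing sampler, which is consistent with the "plug-and-play" framing in the abstract.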

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition