Guiding Noisy Label Conditional Diffusion Models with Score-based Discriminator Correction
By: Dat Nguyen Cong, Hieu Tran Bao, Hoang Thanh-Tung
Potential Business Impact:
Corrects the effect of mislabeled training data so image-generating AI stays accurate and controllable.
Diffusion models have gained prominence as state-of-the-art techniques for synthesizing images and videos, particularly due to their ability to scale effectively with large datasets. Recent studies have uncovered that these extensive datasets often contain labeling errors introduced during manual annotation. However, the extent to which such errors compromise the generative capabilities and controllability of diffusion models remains understudied. This paper introduces Score-based Discriminator Correction (SBDC), a guidance technique for correcting conditional diffusion models pre-trained on noisily labeled data. The guidance is built on a discriminator trained with an adversarial loss, drawing on prior noise-detection techniques to assess the authenticity of each sample. We further show that limiting the guidance to the early phase of the generation process leads to better performance. Our method is computationally efficient, only marginally increases inference time, and does not require retraining diffusion models. Experiments across different noise settings demonstrate the superiority of our method over previous state-of-the-art methods.
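The core idea of discriminator-based correction can be illustrated with a toy sketch. Below, a "pretrained" score points toward a biased mean (standing in for a model trained on mislabeled data), an idealized discriminator's density-ratio gradient supplies the correction, and the correction is applied only during the early, high-noise phase of an annealed sampling loop. All functions and constants here are hypothetical simplifications for illustration, not the paper's actual SBDC implementation.

```python
import numpy as np

def model_score(x):
    """Score of a toy 'noisy-label' model: a Gaussian wrongly centered at 1.0.
    Hypothetical stand-in for a diffusion model trained on mislabeled data."""
    return -(x - 1.0)

def discriminator_guidance(x):
    """Gradient of log(D / (1 - D)) for an idealized discriminator that knows
    the clean data is N(0, 1). For Gaussians this density-ratio gradient is
    the difference between the clean score and the model score. Hypothetical
    stand-in for an adversarially trained discriminator."""
    clean_score = -(x - 0.0)
    return clean_score - model_score(x)

def sample(n_steps=120, guided_frac=0.5, eta0=0.3, decay=0.9, seed=0):
    """Deterministic, annealed sampling loop: early (large-step) updates
    dominate, mimicking the high-noise phase of reverse diffusion."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 3.0, size=2000)  # broad prior
    cutoff = int(guided_frac * n_steps)
    eta = eta0
    for i in range(n_steps):
        s = model_score(x)
        if i < cutoff:  # apply correction only in the early phase
            s = s + discriminator_guidance(x)
        x = x + eta * s  # probability-flow-like update
        eta *= decay     # annealed step size
    return x

guided = sample(guided_frac=0.5)
unguided = sample(guided_frac=0.0)
```

With early-phase guidance the samples concentrate near the clean mean 0, while the unguided run drifts to the biased mean 1 — consistent with the paper's observation that correcting only the early generation phase suffices, since later low-noise steps mainly refine details rather than select the mode.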
Similar Papers
Adaptive and Iterative Point Cloud Denoising with Score-Based Diffusion Model
CV and Pattern Recognition
Cleans up messy 3D scans, keeping details sharp.
Denoising Score Distillation: From Noisy Diffusion Pretraining to One-Step High-Quality Generation
Machine Learning (CS)
Creates good pictures from bad data.
MAD: Manifold Attracted Diffusion
Machine Learning (Stat)
Makes blurry pictures sharp and clear.