Learning Robust Diffusion Models from Imprecise Supervision
By: Dong-Dong Wu, Jiacheng Cui, Wei Wang, and more
Potential Business Impact:
Teaches computers to create better pictures from messy data.
Conditional diffusion models have recently achieved remarkable success in a variety of generative tasks, but their training typically relies on large-scale datasets that inevitably contain imprecise information in the conditional inputs. Such supervision, often stemming from noisy, ambiguous, or incomplete labels, causes condition mismatch and degrades generation quality. To address this challenge, we propose DMIS, a unified framework for training robust Diffusion Models from Imprecise Supervision, which to our knowledge is the first systematic study of this problem in diffusion models. The framework is derived from likelihood maximization and decomposes the objective into generative and classification components: the generative component models imprecise-label distributions, while the classification component leverages a diffusion classifier to infer class-posterior probabilities, with its efficiency further improved by an optimized timestep sampling strategy. Extensive experiments on diverse forms of imprecise supervision, covering image generation, weakly supervised learning, and noisy dataset condensation, demonstrate that DMIS consistently produces high-quality, class-discriminative samples.
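To make the classification component more concrete, here is a minimal sketch of a diffusion classifier: class posteriors are estimated by comparing the conditional denoising error of each class over a sampled subset of timesteps, then applying a softmax over negative losses. Everything here is illustrative, not DMIS's actual implementation: the toy forward process, the prototype-based denoiser `eps_pred`, and the uniform timestep sampling are all simplifying assumptions (the paper's optimized timestep sampling strategy is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(0)

def class_posterior_from_losses(avg_losses):
    # Softmax over negative average denoising losses -- a common heuristic
    # for diffusion classifiers, not necessarily DMIS's exact formulation.
    logits = -np.asarray(avg_losses, dtype=float)
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def diffusion_classifier(x0, eps_pred, num_classes, timesteps, num_samples=16):
    # Monte-Carlo estimate of each class's denoising error over a sampled
    # subset of timesteps (cheaper than sweeping every timestep).
    avg = np.zeros(num_classes)
    for _ in range(num_samples):
        t = rng.choice(timesteps)                 # sampled timestep (uniform here)
        eps = rng.standard_normal(x0.shape)       # fresh Gaussian noise
        x_t = np.sqrt(1.0 - t) * x0 + np.sqrt(t) * eps  # toy forward process
        for y in range(num_classes):
            avg[y] += np.mean((eps - eps_pred(x_t, t, y)) ** 2)
    return class_posterior_from_losses(avg / num_samples)

# Toy conditional denoiser: each class y has a prototype mu[y]; the predictor
# inverts the forward process under the assumption that x0 == mu[y].
mu = np.array([[-2.0, -2.0], [1.0, 1.0], [3.0, -1.0]])

def eps_pred(x_t, t, y):
    return (x_t - np.sqrt(1.0 - t) * mu[y]) / np.sqrt(t)

# The input equals class 1's prototype, so its denoising loss is lowest
# and the posterior should concentrate on class 1.
posterior = diffusion_classifier(mu[1], eps_pred, 3, np.linspace(0.1, 0.9, 9))
print(posterior)
```

In this toy setup the class whose conditional denoiser best recovers the injected noise receives nearly all of the posterior mass, which is the intuition behind using a generative diffusion model as a classifier.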
Similar Papers
Advancing Image Classification with Discrete Diffusion Classification Modeling
CV and Pattern Recognition
Helps computers guess pictures better, even when unsure.
MissDDIM: Deterministic and Efficient Conditional Diffusion for Tabular Data Imputation
Artificial Intelligence
Fills in missing table data quickly and reliably.
Distillation of Discrete Diffusion by Exact Conditional Distribution Matching
Machine Learning (CS)
Makes AI create pictures much faster.