Learning Robust Diffusion Models from Imprecise Supervision

Published: October 3, 2025 | arXiv ID: 2510.03016v1

By: Dong-Dong Wu , Jiacheng Cui , Wei Wang and more

Potential Business Impact:

Trains image-generation models to produce high-quality pictures even when their training labels are noisy, ambiguous, or incomplete.

Business Areas:
Simulation Software

Conditional diffusion models have recently achieved remarkable success across generative tasks, but their training typically relies on large-scale datasets whose conditional inputs inevitably contain imprecise information. Such supervision, often stemming from noisy, ambiguous, or incomplete labels, causes condition mismatch and degrades generation quality. To address this challenge, we propose DMIS, a unified framework for training robust Diffusion Models from Imprecise Supervision, which is the first systematic study of this problem in diffusion models. Our framework is derived from likelihood maximization and decomposes the objective into generative and classification components: the generative component models imprecise-label distributions, while the classification component leverages a diffusion classifier to infer class-posterior probabilities, with its efficiency further improved by an optimized timestep sampling strategy. Extensive experiments on diverse forms of imprecise supervision, covering image generation, weakly supervised learning, and noisy dataset condensation, demonstrate that DMIS consistently produces high-quality and class-discriminative samples.
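The diffusion-classifier idea mentioned in the abstract can be sketched in a toy form: score a sample under each class-conditional model by its denoising error at a few noise levels, then turn negative errors into a softmax posterior. This is a minimal illustration, not the paper's implementation; the per-class "denoiser" here is just a class-mean reference, the timesteps are chosen arbitrarily (the paper optimizes their sampling), and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for class-conditional denoisers: each class is summarized
# by an ideal clean signal (a real diffusion classifier would run the
# model's noise-prediction network instead).
class_means = {0: np.zeros(4), 1: np.ones(4)}

def denoising_error(x, class_id, timesteps):
    """Average squared 'denoising' error of x under a class, approximated
    by the distance of noised copies of x to the class mean."""
    errs = []
    for t in timesteps:
        x_noisy = x + rng.normal(scale=t, size=x.shape)
        errs.append(np.mean((x_noisy - class_means[class_id]) ** 2))
    return np.mean(errs)

def class_posterior(x, classes, timesteps):
    """Class-posterior probabilities via a softmax over negative
    denoising errors (the diffusion-classifier principle)."""
    neg_err = np.array([-denoising_error(x, c, timesteps) for c in classes])
    w = np.exp(neg_err - neg_err.max())  # stabilized softmax
    return w / w.sum()

x = 0.9 * np.ones(4)          # sample close to class 1
timesteps = [0.1, 0.3, 0.5]   # a few noise levels, uniformly chosen here
post = class_posterior(x, [0, 1], timesteps)
print(post)                   # posterior should favor class 1
```

In DMIS this posterior would feed the classification component of the decomposed likelihood objective, letting the model resolve imprecise labels during training.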

Page Count
37 pages

Category
Computer Science:
Machine Learning (CS)