Quantifying the Ease of Reproducing Training Data in Unconditional Diffusion Models
By: Masaya Hasegawa, Koji Yasuda
Potential Business Impact:
Finds and fixes training images that a model copies too easily.
Diffusion models, which have advanced rapidly in recent years, can generate samples that closely resemble their training data. This phenomenon, known as memorization, may raise copyright issues. In this study, we propose a method to quantify how easily an unconditional diffusion model reproduces its training data. The mean of a sample population following the Langevin equation of the reverse diffusion process evolves according to a first-order ordinary differential equation (ODE). This ODE establishes a one-to-one correspondence between images and their noisy counterparts in the latent space. Because the ODE is reversible and the initial noisy images are sampled randomly, the volume of the latent region that maps to an image represents the probability of generating that image. We therefore examine the ODE that projects images into the latent space, and we quantify the ease of reproducing a training sample by measuring the volume growth rate along this projection. Because the method has relatively low computational cost, it can be used to improve training data quality by detecting and modifying easily memorized training samples.
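The abstract does not include code, but the volume growth rate it describes follows the standard change-of-variables result for flows: along an ODE dx/dt = f(x, t), the log-volume of an infinitesimal neighborhood grows at rate div f. Below is a minimal sketch of that general technique, not the authors' implementation: it assumes a learned score function `score_fn(x, t)` and a hypothetical variance-preserving noise schedule `beta(t)` (both placeholders), integrates the probability-flow ODE from image (t=0) to latent (t=1), and accumulates a Hutchinson estimate of the divergence as the log-volume growth.

```python
# Minimal sketch (assumptions: score_fn and beta are placeholders,
# not part of the paper) of measuring log-volume growth along the
# probability-flow ODE of a variance-preserving diffusion model.
import torch

def beta(t: torch.Tensor) -> torch.Tensor:
    # Hypothetical linear VP noise schedule.
    return 0.1 + (20.0 - 0.1) * t

def ode_drift(x, t, score_fn):
    # Probability-flow ODE drift for a VP diffusion:
    # dx/dt = -0.5 * beta(t) * (x + score(x, t))
    return -0.5 * beta(t) * (x + score_fn(x, t))

def log_volume_growth(x0, score_fn, n_steps=100, n_probes=4):
    """Integrate div(f) from t=0 (image) to t=1 (latent).

    The accumulated divergence is the log-volume growth of an
    infinitesimal neighborhood of the image under the image->latent
    map; a large value flags an easily reproduced sample.
    """
    x = x0.clone()
    dt = 1.0 / n_steps
    log_vol = torch.zeros(x0.shape[0], device=x0.device)
    for i in range(n_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        t = t.view(-1, *([1] * (x.dim() - 1)))  # broadcast over pixels
        x = x.detach().requires_grad_(True)
        f = ode_drift(x, t, score_fn)
        # Hutchinson estimator: E[v^T (df/dx) v] = tr(df/dx) = div f.
        div = torch.zeros(x.shape[0], device=x.device)
        for _ in range(n_probes):
            v = torch.randn_like(x)
            (vjp,) = torch.autograd.grad(f, x, grad_outputs=v,
                                         retain_graph=True)
            div += (vjp * v).flatten(1).sum(dim=1)
        div /= n_probes
        log_vol += div.detach() * dt
        x = (x + f * dt).detach()  # forward Euler step toward the latent
    return log_vol
```

Ranking training images by the returned log-volume growth highlights those whose latent preimage occupies a large region: randomly sampled initial noise is disproportionately likely to land there and reverse to a near copy of that image, which is the low-cost screening step the abstract proposes.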
Similar Papers
Geometric Regularity in Deterministic Sampling of Diffusion-based Generative Models
Machine Learning (CS)
Makes AI create better pictures faster.
Diffusion models under low-noise regime
CV and Pattern Recognition
Helps AI make better pictures by learning from less data.
Reconstruction-Free Anomaly Detection with Diffusion Models
CV and Pattern Recognition
Finds weird things in pictures much faster.