Score: 1

Quantifying the Ease of Reproducing Training Data in Unconditional Diffusion Models

Published: March 25, 2025 | arXiv ID: 2503.19429v1

By: Masaya Hasegawa, Koji Yasuda

Potential Business Impact:

Spots training images that a diffusion model can copy too easily, so they can be fixed or removed before they cause copyright trouble.

Business Areas:
Motion Capture, Media and Entertainment, Video

Diffusion models, which have been advancing rapidly in recent years, may generate samples that closely resemble the training data. This phenomenon, known as memorization, may lead to copyright issues. In this study, we propose a method to quantify the ease of reproducing training data in unconditional diffusion models. The average of a sample population following the Langevin equation in the reverse diffusion process moves according to a first-order ordinary differential equation (ODE). This ODE establishes a one-to-one correspondence between images and their noisy counterparts in the latent space. Since the ODE is reversible and the initial noisy images are sampled randomly, the volume of the latent region onto which an image is projected represents the probability of generating that image. We examined the ODE that projects images to latent space and succeeded in quantifying the ease of reproducing training data by measuring the volume growth rate of this projection. Given the relatively low computational complexity of this method, it allows us to enhance the quality of training data by detecting and modifying the easily memorized training samples.
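Below is a minimal sketch of the volume-growth idea described in the abstract, not the authors' exact implementation. It assumes a PyTorch setting where `drift_fn(x, t)` is a hypothetical placeholder for the probability-flow ODE drift (e.g., built from a trained score model), and it estimates the log of the volume growth factor along the image-to-latent trajectory by integrating the drift's divergence with Hutchinson's trace estimator. The time grid, probe count, and Euler integration are illustrative assumptions.

```python
import torch

def divergence_estimate(drift_fn, x, t, n_probes=4):
    """Hutchinson estimate of div_x drift(x, t) using Gaussian probes."""
    x = x.detach().requires_grad_(True)
    div = torch.zeros(x.shape[0], device=x.device)
    for _ in range(n_probes):
        eps = torch.randn_like(x)                 # random probe vector
        f = drift_fn(x, t)                        # probability-flow drift (assumed)
        vjp = torch.autograd.grad(f, x, grad_outputs=eps)[0]
        div += (vjp * eps).flatten(1).sum(dim=1)  # eps^T J eps per sample
    return div / n_probes

def log_volume_growth(drift_fn, x0, t_grid):
    """
    Euler-integrate the image-to-latent probability-flow ODE and accumulate
    the divergence of the drift along the trajectory. The accumulated value
    is the log of the volume growth factor of a small neighborhood of x0;
    a larger value means the image occupies a larger latent region, i.e.
    it is easier for the model to reproduce.
    """
    x = x0.clone()
    log_vol = torch.zeros(x0.shape[0], device=x0.device)
    for t_cur, t_next in zip(t_grid[:-1], t_grid[1:]):
        dt = t_next - t_cur
        log_vol += divergence_estimate(drift_fn, x, t_cur) * dt
        with torch.no_grad():
            x = x + drift_fn(x, t_cur) * dt       # Euler step toward latent space
    return x, log_vol
```

Ranking training images by the returned `log_vol` would then flag the easily memorized samples for inspection or modification, in line with the use case described above.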

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)