Provable Separations between Memorization and Generalization in Diffusion Models
By: Zeqi Ye, Qijie Zhu, Molei Tao, and others
Potential Business Impact:
Stops AI from copying its training pictures.
Diffusion models have achieved remarkable success across diverse domains, but they remain vulnerable to memorization -- reproducing training data rather than generating novel outputs. This not only limits their creative potential but also raises concerns about privacy and safety. While empirical studies have explored mitigation strategies, theoretical understanding of memorization remains limited. We address this gap by developing a dual-separation result via two complementary perspectives: statistical estimation and network approximation. On the estimation side, we show that the ground-truth score function does not minimize the empirical denoising loss, creating a separation that drives memorization. On the approximation side, we prove that implementing the empirical score function requires network size to scale with sample size, marking a separation from the more compact network representation of the ground-truth score function. Guided by these insights, we develop a pruning-based method that reduces memorization while maintaining generation quality in diffusion transformers.
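To make the approximation-side separation concrete, here is a minimal sketch (not the paper's construction) of the empirical score function the abstract refers to: the score of the Gaussian-smoothed empirical distribution is a softmax-weighted average over all n training points, so an exact implementation must grow with the sample size. Variable names (`alpha_t`, `sigma_t` for the forward-process scaling and noise level) are illustrative assumptions.

```python
import numpy as np

def empirical_score(x, data, alpha_t, sigma_t):
    """Score of q_t(x) = (1/n) * sum_i N(x; alpha_t * x_i, sigma_t^2 I).

    grad log q_t(x) is a posterior-weighted average of (alpha_t*x_i - x)/sigma_t^2
    over ALL n training points -- this is why an exact network implementation
    scales with n, and sampling with this score reproduces the training data.
    """
    diffs = alpha_t * data - x                       # (n, d): means minus query
    logits = -np.sum(diffs ** 2, axis=1) / (2 * sigma_t ** 2)
    w = np.exp(logits - logits.max())                # stable softmax weights
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / sigma_t ** 2
```

As sigma_t shrinks, the softmax weights concentrate on the nearest training point, so following this score field drives samples onto the training data, which is the memorization behavior the abstract describes.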
Similar Papers
On the Edge of Memorization in Diffusion Models
Machine Learning (CS)
Helps AI learn without copying its training pictures.
A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective
Machine Learning (CS)
Stops AI from copying itself when making new pictures.