Provable Separations between Memorization and Generalization in Diffusion Models

Published: November 5, 2025 | arXiv ID: 2511.03202v1

By: Zeqi Ye, Qijie Zhu, Molei Tao, and more

Potential Business Impact:

Helps prevent generative AI models from reproducing their training images verbatim.

Business Areas:
A/B Testing Data and Analytics

Diffusion models have achieved remarkable success across diverse domains, but they remain vulnerable to memorization -- reproducing training data rather than generating novel outputs. This not only limits their creative potential but also raises concerns about privacy and safety. While empirical studies have explored mitigation strategies, theoretical understanding of memorization remains limited. We address this gap by developing a dual-separation result via two complementary perspectives: statistical estimation and network approximation. On the estimation side, we show that the ground-truth score function does not minimize the empirical denoising loss, creating a separation that drives memorization. On the approximation side, we prove that implementing the empirical score function requires network size to scale with sample size, establishing a separation from the more compact network representation of the ground-truth score function. Guided by these insights, we develop a pruning-based method that reduces memorization while maintaining generation quality in diffusion transformers.
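The separation described above hinges on a standard fact: the score of the noised *empirical* distribution (a Gaussian mixture centered at the training points) has a closed form as a softmax-weighted pull toward the training data, so representing it exactly requires capacity that grows with the sample size. The sketch below is illustrative only and is not code from the paper; the function name and the VP-style noising parameters `alpha_t`, `sigma_t` are assumptions for the example.

```python
import numpy as np

def empirical_score(x, data, alpha_t, sigma_t):
    """Score of p_t(x) = (1/n) sum_i N(x; alpha_t * x_i, sigma_t^2 I).

    Closed form: a softmax-weighted average, over training points x_i,
    of the directions (alpha_t * x_i - x) / sigma_t^2. Sampling with this
    exact score drives trajectories back onto training points, i.e.
    memorization.
    """
    diffs = alpha_t * data - x                         # (n, d) pull directions
    logits = -np.sum(diffs ** 2, axis=1) / (2 * sigma_t ** 2)
    w = np.exp(logits - logits.max())                  # stable softmax weights
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / sigma_t ** 2

# Two training points; query near the first one at low noise.
data = np.array([[1.0, 0.0], [-1.0, 0.0]])
s = empirical_score(np.array([0.9, 0.0]), data, alpha_t=1.0, sigma_t=0.1)
```

At low noise the softmax weight concentrates on the nearest training point, so the score points almost exactly toward it; this per-point attraction is what a compact network approximating the ground-truth score need not (and cannot) reproduce.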

Country of Origin
🇺🇸 United States

Page Count
51 pages

Category
Statistics - Machine Learning (stat.ML)