A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective
By: Lianghe Shi, Meng Wu, Huijie Zhang, and more
Potential Business Impact:
Keeps image-generating AI from copying its training data after repeated training on AI-made pictures.
The widespread use of diffusion models has led to an abundance of AI-generated data, raising concerns about model collapse -- a phenomenon in which recursive iterations of training on synthetic data lead to performance degradation. Prior work primarily characterizes this collapse via variance shrinkage or distribution shift, but these perspectives miss practical manifestations of model collapse. This paper identifies a transition from generalization to memorization during model collapse in diffusion models, where models increasingly replicate training data instead of generating novel content during iterative training on synthetic samples. This transition is directly driven by the declining entropy of the synthetic training data produced in each training cycle, which serves as a clear indicator of model degradation. Motivated by this insight, we propose an entropy-based data selection strategy to mitigate the transition from generalization to memorization and alleviate model collapse. Empirical results show that our approach significantly enhances visual quality and diversity in recursive generation, effectively preventing collapse.
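The abstract names two technical ingredients without detailing them: the entropy of each cycle's synthetic training data as an indicator of collapse, and an entropy-based selection rule to filter that data before retraining. Below is a minimal, hypothetical Python sketch of how such a pipeline could look, assuming generated images are first embedded by an encoder and entropy is approximated with a k-nearest-neighbor (Kozachenko-Leonenko) estimator; the function names, the per-sample scoring rule, and the k and budget parameters are illustrative assumptions, not the authors' released method.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko differential entropy estimate (nats) for samples x of shape (n, d)."""
    n, d = x.shape
    dists = cdist(x, x)
    np.fill_diagonal(dists, np.inf)
    eps = np.sort(dists, axis=1)[:, k - 1]            # distance to each sample's k-th neighbor
    log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps + 1e-12))

def select_high_entropy_subset(features, budget, k=3):
    """Keep the `budget` samples that contribute most to the entropy estimate.

    Samples crowded into dense, low-entropy regions (the ones a collapsing model
    tends to replicate) have small k-NN distances and are dropped; samples in
    sparse regions are kept, preserving diversity for the next training round.
    """
    dists = cdist(features, features)
    np.fill_diagonal(dists, np.inf)
    eps = np.sort(dists, axis=1)[:, k - 1]
    return np.argsort(-eps)[:budget]                   # largest k-NN distance first

# Toy usage: monitor entropy across generations and filter synthetic data before retraining.
synthetic = np.random.randn(1000, 32)                  # stand-in for encoder features of generated images
kept = synthetic[select_high_entropy_subset(synthetic, budget=300)]
print(knn_entropy(synthetic), knn_entropy(kept))       # track the entropy indicator per training cycle

In a recursive-training loop, one would compute knn_entropy on each generation's synthetic pool to watch for the declining-entropy signal the abstract describes, and apply select_high_entropy_subset to that pool before the next retraining step.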
Similar Papers
Multi-modal Synthetic Data Training and Model Collapse: Insights from VLMs and Diffusion Models
Machine Learning (CS)
Keeps AI from getting worse when it learns from AI-made data.
On the Edge of Memorization in Diffusion Models
Machine Learning (CS)
Helps AI learn without copying its training pictures.
Position: Model Collapse Does Not Mean What You Think
Machine Learning (CS)
Argues AI models won't automatically get worse from training on AI-made data.