Convergence and Stability Analysis of Self-Consuming Generative Models with Heterogeneous Human Curation
By: Hongru Zhao, Jinwen Fu, Tuan Pham
Potential Business Impact:
Shows when AI models that retrain on their own outputs stay stable.
Self-consuming generative models have received significant attention over the last few years. In this paper, we study a self-consuming generative model with heterogeneous human preferences, generalizing the model of Ferbach et al. (2024). The model is retrained round by round on a mixture of real data and its own previous-round synthetic outputs. The asymptotic behavior of the retraining dynamics is investigated across four regimes using different techniques, including nonlinear Perron–Frobenius theory. Our analyses improve upon those of Ferbach et al. (2024) and provide convergence results in settings where standard Banach contraction-mapping arguments do not apply. Stability and instability results for the retraining dynamics are also given.
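To make the round-by-round dynamic concrete, here is a minimal sketch in Python of one plausible self-consuming retraining loop with heterogeneous curation. It is an illustration under assumptions, not the paper's construction: the 1-D Gaussian "generative model", the closest-to-ideal-point curation rule, the user ideal points, and the mixing fraction lam are all hypothetical choices made for this example.

# Toy sketch (assumed setup): a Gaussian model is refit each round on a
# mixture of real data and synthetic samples curated by heterogeneous users.
import numpy as np

rng = np.random.default_rng(0)

# Real data: a fixed sample from the (unknown) target distribution.
real_data = rng.normal(loc=0.0, scale=1.0, size=2000)

# Heterogeneous "users": each prefers samples near its own ideal point (assumed rule).
user_ideal_points = np.array([-0.5, 0.0, 1.0])

def fit_gaussian(x):
    # Toy generative model: fit mean and std of a 1-D Gaussian.
    return x.mean(), x.std()

def curate(samples, ideal, k=5):
    # From each group of k candidates, keep the one closest to the user's
    # ideal point (a stand-in for preference-based human curation).
    groups = samples[: (len(samples) // k) * k].reshape(-1, k)
    winners = groups[np.arange(len(groups)), np.argmin(np.abs(groups - ideal), axis=1)]
    return winners

mu, sigma = fit_gaussian(real_data)
lam = 0.5          # fraction of real data in each retraining mixture (assumed)
n_synth = 2000

for t in range(20):
    # Generate synthetic data from the current model.
    synth = rng.normal(mu, sigma, size=n_synth)
    # Each user curates an equal share of the synthetic pool.
    shares = np.array_split(synth, len(user_ideal_points))
    curated = np.concatenate([curate(s, p) for s, p in zip(shares, user_ideal_points)])
    # Retrain on a lam-mixture of real and curated synthetic data.
    n_real = int(lam * len(curated) / (1 - lam))
    mixture = np.concatenate([rng.choice(real_data, size=n_real), curated])
    mu, sigma = fit_gaussian(mixture)
    print(f"round {t:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")

Running the sketch shows the fitted parameters drifting toward a curation-dependent fixed point; whether and when such dynamics converge or remain stable is the kind of question the paper analyzes rigorously.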
Similar Papers
Self-Consuming Generative Models with Adversarially Curated Data
Machine Learning (CS)
Makes AI models learn wrong things from bad data.
Convergence Dynamics and Stabilization Strategies of Co-Evolving Generative Models
Machine Learning (CS)
AI models learn together, but can get stuck.
Stabilizing Self-Consuming Diffusion Models with Latent Space Filtering
Machine Learning (CS)
Filters bad computer-made pictures to improve AI.