Rethinking Diffusion Model in High Dimension
By: Zhenxin Zheng, Zhenjie Zheng
Potential Business Impact:
Makes computers create realistic pictures from simple ideas.
The curse of dimensionality is an unavoidable challenge for statistical probability models, yet diffusion models appear to overcome it, achieving impressive results in high-dimensional data generation. Diffusion models are assumed to learn the statistical properties of the underlying probability distribution, so that sampling from this distribution generates realistic samples. But is this really how they work? To address this question, this paper conducts a detailed analysis of the objective function and inference methods of diffusion models, leading to several conclusions that help answer it: 1) In high-dimensional sparse scenarios, the fitting target of the objective function degenerates from a weighted sum of multiple samples to a single sample. 2) The mainstream inference methods can all be represented within a simple unified framework, without requiring statistical concepts such as Markov chains and SDEs, while aligning with the degenerate objective function. 3) Guided by this simple framework, more efficient inference methods can be discovered.
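A minimal numerical sketch of conclusion 1), under standard assumptions not spelled out in this abstract: for an empirical training set, the ideal denoising target is E[x_0 | x_t] = Σ_i w_i x_i, where the weights w_i form a softmax over Gaussian likelihoods with noise schedule parameters alpha_t and sigma_t. The toy experiment below (sample counts and schedule values are illustrative choices, not from the paper) shows that as the dimension grows while the sample count stays fixed, the largest weight approaches 1, i.e. the weighted sum collapses to a single sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_weights(x_t, data, alpha_t, sigma_t):
    # Weights w_i in the ideal denoising target E[x_0 | x_t] = sum_i w_i x_i:
    # a softmax over Gaussian likelihoods N(x_t; alpha_t * x_i, sigma_t^2 I),
    # i.e. the posterior over training samples given the noisy observation.
    sq_dists = np.sum((x_t - alpha_t * data) ** 2, axis=1)
    logits = -sq_dists / (2.0 * sigma_t ** 2)
    logits -= logits.max()          # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

alpha_t, sigma_t = 0.7, 0.5        # illustrative noise-schedule values
for dim in (2, 64, 2048):
    data = rng.normal(size=(1000, dim))   # sparse: 1000 samples in R^dim
    x_t = alpha_t * data[0] + sigma_t * rng.normal(size=dim)
    w = posterior_weights(x_t, data, alpha_t, sigma_t)
    print(f"dim={dim:5d}  max weight={w.max():.4f}")
```

In low dimension the weight mass is spread over several nearby samples; in high dimension the squared-distance gaps scale with the dimension, so the softmax concentrates essentially all mass on one sample, matching the degeneration the paper describes.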