Implicit score matching meets denoising score matching: improved rates of convergence and log-density Hessian estimation
By: Konstantin Yakovlev, Anna Markovich, Nikita Puchkin
We study the problem of estimating the score function using both implicit score matching and denoising score matching. Assuming that the data distribution exhibits a low-dimensional structure, we prove that implicit score matching not only adapts to the intrinsic dimension but also achieves the same rates of convergence as denoising score matching in terms of the sample size. Furthermore, we demonstrate that both methods allow us to estimate log-density Hessians, without the curse of dimensionality, by simple differentiation. This justifies the convergence of ODE-based samplers for generative diffusion models. Our approach is based on Gagliardo-Nirenberg-type inequalities relating weighted $L^2$-norms of smooth functions and their derivatives.
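For context, the two objectives compared above are standard in the score matching literature; the display below records them in a common formulation (Hyvärinen, 2005; Vincent, 2011). The notation, $s_\theta$ for the score model and $p_\sigma(\tilde{x} \mid x)$ for the Gaussian noising kernel, is ours and may differ from the paper's.

% Implicit score matching: integration by parts removes the unknown true score.
\[
  \mathcal{L}_{\mathrm{ISM}}(\theta)
  = \mathbb{E}_{x \sim p}\left[ \tfrac{1}{2}\,\|s_\theta(x)\|^2
  + \operatorname{tr}\big(\nabla_x s_\theta(x)\big) \right],
\]
% Denoising score matching: regression onto the score of the noising kernel.
\[
  \mathcal{L}_{\mathrm{DSM}}(\theta)
  = \mathbb{E}_{x \sim p,\; \tilde{x} \sim p_\sigma(\cdot \mid x)}
    \left[ \big\| s_\theta(\tilde{x})
    - \nabla_{\tilde{x}} \log p_\sigma(\tilde{x} \mid x) \big\|^2 \right].
\]

Under standard regularity conditions, both objectives coincide with the explicit score matching loss up to additive constants independent of $\theta$, which is why convergence rates for one are naturally compared with the other. The trace term in $\mathcal{L}_{\mathrm{ISM}}$ also hints at the Hessian claim: since $\nabla^2 \log p = \nabla_x \big(\nabla_x \log p\big)$, differentiating a fitted score $s_\theta \approx \nabla_x \log p$ yields a log-density Hessian estimate.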