Latent Nonlinear Denoising Score Matching for Enhanced Learning of Structured Distributions
By: Kaichen Shen, Wei Zhu
Potential Business Impact:
Makes AI create better, more varied pictures faster.
We present latent nonlinear denoising score matching (LNDSM), a novel training objective for score-based generative models that integrates nonlinear forward dynamics into the VAE-based latent SGM framework. This combination is achieved by reformulating the cross-entropy term using the approximate Gaussian transition induced by the Euler-Maruyama scheme. To ensure numerical stability, we identify and remove two zero-mean but variance-exploding terms that arise at small time steps. Experiments on variants of the MNIST dataset demonstrate that the proposed method achieves faster synthesis and enhanced learning of inherently structured distributions. Compared to benchmark structure-agnostic latent SGMs, LNDSM consistently attains superior sample quality and variability.
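The abstract's key construction is that a nonlinear forward SDE, discretized with the Euler-Maruyama scheme, induces an approximately Gaussian one-step transition whose log-density can enter a cross-entropy objective in closed form. The sketch below illustrates this idea only; the drift `f`, diffusion `g`, and all parameter values are hypothetical stand-ins, not the dynamics used in the paper.

```python
import numpy as np

# One Euler-Maruyama step of a nonlinear forward SDE
#   dx = f(x, t) dt + g(t) dW
# induces an approximately Gaussian transition kernel:
#   p(x_{t+dt} | x_t) ~ N(x_t + f(x_t, t) dt, g(t)^2 dt I).

def f(x, t):
    # Hypothetical nonlinear drift (linear pull plus cubic contraction).
    return -x - 0.1 * x**3

def g(t):
    # Hypothetical time-dependent diffusion coefficient.
    return 1.0 + 0.5 * t

def em_step(x, t, dt, rng):
    """Sample one Euler-Maruyama step of the forward SDE."""
    noise = rng.standard_normal(x.shape)
    return x + f(x, t) * dt + g(t) * np.sqrt(dt) * noise

def gaussian_log_transition(x_next, x, t, dt):
    """Log-density of the approximate Gaussian transition kernel
    induced by the Euler-Maruyama discretization."""
    mean = x + f(x, t) * dt
    var = g(t) ** 2 * dt
    d = x.size
    return -0.5 * (np.sum((x_next - mean) ** 2) / var
                   + d * np.log(2.0 * np.pi * var))

rng = np.random.default_rng(0)
x = rng.standard_normal(4)  # a latent code, e.g. from a VAE encoder
x_next = em_step(x, t=0.1, dt=1e-3, rng=rng)
logp = gaussian_log_transition(x_next, x, t=0.1, dt=1e-3)
```

In a full training loop, a tractable log-transition like `gaussian_log_transition` is what makes the cross-entropy term computable for nonlinear dynamics; the paper's stability fix (removing zero-mean, variance-exploding terms at small `dt`) is not reproduced here.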
Similar Papers
Latent Diffusion Model Based Denoising Receiver for 6G Semantic Communication: From Stochastic Differential Theory to Application
Machine Learning (CS)
Makes messages clear even with bad signals.
Skewness-Robust Causal Discovery in Location-Scale Noise Models
Machine Learning (Stat)
Finds what causes what, even with messy data.
Expressive Score-Based Priors for Distribution Matching with Geometry-Preserving Regularization
Machine Learning (CS)
Makes computer learning fairer and more stable.