Data-Dependent Smoothing for Protein Discovery with Walk-Jump Sampling
By: Srinivas Anumasa, Barath Chandran. C, Tingting Chen, and more
Potential Business Impact:
Makes AI create better protein designs.
Diffusion models have emerged as a powerful class of generative models by learning to iteratively reverse the noising process. Their ability to generate high-quality samples has extended beyond high-dimensional image data to other complex domains such as proteins, where data distributions are typically sparse and unevenly spread. Importantly, the sparsity itself is uneven. Empirically, we observed that while a small fraction of samples lie in dense clusters, the majority occupy regions of varying sparsity across the data space. Existing approaches largely ignore this data-dependent variability. In this work, we introduce a Data-Dependent Smoothing Walk-Jump framework that employs kernel density estimation (KDE) as a preprocessing step to estimate the noise scale $\sigma$ for each data point, followed by training a score model with these data-dependent $\sigma$ values. By incorporating local data geometry into the denoising process, our method accounts for the heterogeneous distribution of protein data. Empirical evaluations demonstrate that our approach yields consistent improvements across multiple metrics, highlighting the importance of data-aware sigma prediction for generative modeling in sparse, high-dimensional settings.
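The KDE preprocessing step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy 2-D data, the `bandwidth` value, the linear density-to-sigma map, and the `[sigma_min, sigma_max]` range are all illustrative assumptions standing in for whatever the authors use on protein data.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Toy data mimicking the observed structure: a small dense cluster
# plus points scattered over sparser regions of the space.
dense = rng.normal(0.0, 0.2, size=(200, 2))
sparse = rng.uniform(-4.0, 4.0, size=(20, 2))
X = np.vstack([dense, sparse])

# KDE preprocessing: estimate the local log-density at each data point.
# Kernel choice and bandwidth are illustrative, not from the paper.
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
log_density = kde.score_samples(X)

# Map local density to a per-point noise scale sigma:
# low-density (sparse) points get a larger sigma, dense points a smaller one.
# The linear rescaling below is one simple choice of such a map.
sigma_min, sigma_max = 0.1, 1.0
d = (log_density - log_density.min()) / (log_density.max() - log_density.min())
sigma = sigma_max - d * (sigma_max - sigma_min)  # shape (220,)
```

These per-point `sigma` values would then condition the noising step when training the score model, so that sparse regions are smoothed more aggressively than dense clusters.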
Similar Papers
Generative modelling with jump-diffusions
Machine Learning (CS)
Makes AI create more realistic pictures and sounds.
Dimension-Free Convergence of Diffusion Models for Approximate Gaussian Mixtures
Machine Learning (CS)
Makes AI create realistic pictures faster.
Assessing the Quality of Denoising Diffusion Models in Wasserstein Distance: Noisy Score and Optimal Bounds
Machine Learning (Stat)
Makes AI create better pictures from messy data.