Weak Diffusion Priors Can Still Achieve Strong Inverse-Problem Performance
By: Jing Jia, Wei Yuan, Sifan Liu, and more
Potential Business Impact:
Makes AI better at fixing blurry pictures.
Can a diffusion model trained on bedrooms recover human faces? Diffusion models are widely used as priors for inverse problems, but standard approaches usually assume a high-fidelity model trained on data that closely match the unknown signal. In practice, one often must use a mismatched or low-fidelity diffusion prior. Surprisingly, these weak priors often perform nearly as well as full-strength, in-domain baselines. We study when and why inverse solvers are robust to weak diffusion priors. Through extensive experiments, we find that weak priors succeed when measurements are highly informative (e.g., many observed pixels), and we identify regimes where they fail. Our theory, based on Bayesian consistency, gives conditions under which high-dimensional measurements make the posterior concentrate near the true signal. These results provide a principled justification for when weak diffusion priors can be used reliably.
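The Bayesian-consistency intuition can be sketched in a toy setting. This is not the paper's diffusion-based method; it is a minimal, assumption-laden Gaussian analogue in which a badly misspecified prior (standing in for a "weak" prior) still yields an accurate posterior mean once the linear measurements become informative enough. All dimensions, noise levels, and names here are illustrative choices.

```python
import numpy as np

# Toy analogue of the paper's claim: with a conjugate Gaussian model,
# the posterior concentrates near the true signal as the number of
# informative measurements grows, even when the prior mean is far off.

rng = np.random.default_rng(0)
d = 32                        # signal dimension (illustrative)
x_true = rng.normal(size=d)   # unknown signal

mu0 = 5.0 * np.ones(d)        # deliberately mismatched prior mean, N(mu0, I)
sigma = 0.1                   # measurement noise std

def posterior_error(m):
    """Error of the posterior mean given m noisy linear measurements y = A x + noise."""
    A = rng.normal(size=(m, d))
    y = A @ x_true + sigma * rng.normal(size=m)
    # Conjugate Gaussian posterior: precision = I + A^T A / sigma^2
    prec = np.eye(d) + A.T @ A / sigma**2
    mean = np.linalg.solve(prec, mu0 + A.T @ y / sigma**2)
    return float(np.linalg.norm(mean - x_true))

for m in (8, 64, 512):
    print(f"m={m:4d}  ||posterior mean - x_true|| = {posterior_error(m):.3f}")
```

With m = 8 the problem is underdetermined and the mismatched prior dominates, so the error stays large; with m = 512 the likelihood term swamps the prior and the posterior mean lands close to `x_true`, mirroring the "highly informative measurements" regime described in the abstract.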
Similar Papers
Deep generative priors for 3D brain analysis
CV and Pattern Recognition
Improves brain scans by learning anatomy from data.
Diffusion models for inverse problems
Machine Learning (CS)
Makes blurry pictures clear using smart computer tricks.
When are Diffusion Priors Helpful in Sparse Reconstruction? A Study with Sparse-view CT
Medical Physics
Makes blurry medical scans clearer with less data.