Solving Bayesian inverse problems with diffusion priors and off-policy RL

Published: March 12, 2025 | arXiv ID: 2503.09746v1

By: Luca Scimeca, Siddarth Venkatraman, Moksh Jain, and more

Potential Business Impact:

Makes AI better at solving hard scientific inference puzzles, such as recovering an underlying image or signal from indirect, noisy measurements.

Business Areas:
A/B Testing, Data and Analytics

This paper presents a practical application of Relative Trajectory Balance (RTB), a recently introduced off-policy reinforcement learning (RL) objective that can asymptotically solve Bayesian inverse problems optimally. We extend the original work by using RTB to train conditional diffusion model posteriors from pretrained unconditional priors for challenging linear and non-linear inverse problems in vision and science. We use the objective alongside techniques such as off-policy backtracking exploration to improve training. Importantly, our results show that existing training-free diffusion posterior methods struggle to perform effective posterior inference in latent space due to inherent biases.
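To make the training objective concrete, here is a minimal PyTorch-style sketch of an RTB loss for diffusion posteriors, assuming Gaussian denoising transitions with a fixed per-step noise scale; `posterior_net`, `prior_net`, `log_reward`, and the trajectory layout are hypothetical stand-ins for illustration, not the paper's actual implementation. The idea is to learn a scalar log-partition estimate log Z alongside the posterior sampler so that, along a denoising trajectory, log Z plus the posterior's trajectory log-likelihood matches the frozen prior's trajectory log-likelihood plus the log-reward (e.g., a measurement log-likelihood) at the final sample. Because the squared residual remains a valid objective for trajectories drawn from any sampler, training is off-policy, which is what permits techniques like backtracking exploration.

```python
# Minimal sketch of a Relative Trajectory Balance (RTB) loss, assuming
# Gaussian denoising transitions. `posterior_net`, `prior_net`, and
# `log_reward` are hypothetical stand-ins, not the paper's released code.
import math
import torch
import torch.nn as nn

def gaussian_log_prob(x, mean, std):
    """log N(x; mean, std^2 I), summed over all non-batch dimensions."""
    log_p = (-0.5 * (((x - mean) / std) ** 2)
             - math.log(std) - 0.5 * math.log(2 * math.pi))
    return log_p.flatten(1).sum(dim=1)

class RTBLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Learned scalar estimate of the log partition function log Z.
        self.log_Z = nn.Parameter(torch.zeros(()))

    def forward(self, traj, posterior_net, prior_net, log_reward, std):
        """traj: list [x_T, ..., x_0] of latents along a denoising trajectory.

        RTB drives log Z + sum_t log p_post(x_{t-1} | x_t) toward
        log r(x_0) + sum_t log p_prior(x_{t-1} | x_t), so the trained
        posterior samples proportionally to prior(x_0) * r(x_0).
        """
        log_p_post, log_p_prior = 0.0, 0.0
        T = len(traj) - 1
        for t in range(T):
            x_t, x_prev = traj[t], traj[t + 1]
            tt = torch.full((x_t.shape[0],), T - t, device=x_t.device)
            # Both networks predict the mean of the next (less noisy) latent.
            log_p_post = log_p_post + gaussian_log_prob(
                x_prev, posterior_net(x_t, tt), std)
            with torch.no_grad():  # the pretrained prior stays frozen
                log_p_prior = log_p_prior + gaussian_log_prob(
                    x_prev, prior_net(x_t, tt), std)
        # The trajectory may come from any sampler (the posterior itself, the
        # prior, a replay buffer, or backtracking): the objective is off-policy.
        residual = self.log_Z + log_p_post - log_p_prior - log_reward(traj[-1])
        return (residual ** 2).mean()
```

Note that the prior's transitions are evaluated under `torch.no_grad()` since the pretrained prior is held fixed; only the posterior network and the log Z parameter receive gradients.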

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)