Reinforced sequential Monte Carlo for amortised sampling
By: Sanghyeok Choi, Sarthak Mittal, Víctor Elvira, and more
Potential Business Impact:
Helps computers learn complex patterns faster.
This paper proposes a synergy of amortised and particle-based methods for sampling from distributions defined by unnormalised density functions. We establish a connection between sequential Monte Carlo (SMC) and neural sequential samplers trained by maximum-entropy reinforcement learning (MaxEnt RL), wherein learnt sampling policies and value functions define proposal kernels and twist functions. Exploiting this connection, we introduce an off-policy RL training procedure for the sampler in which SMC, with the learnt sampler as its proposal, acts as a behaviour policy that better explores the target distribution. We describe techniques for stable joint training of proposals and twist functions, along with an adaptive weight-tempering scheme that reduces the variance of the training signal. Furthermore, building upon past attempts to use experience replay to guide the training of neural samplers, we derive a way to combine historical samples with annealed importance sampling weights within a replay buffer. On synthetic multi-modal targets (in both continuous and discrete spaces) and the Boltzmann distribution of alanine dipeptide conformations, we demonstrate improvements in approximating the true distribution as well as in training stability compared to both amortised and Monte Carlo methods.
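To make the SMC-as-behaviour-policy idea concrete, below is a minimal sketch, not the authors' implementation: a tempered SMC pass in which particles are propagated by a proposal kernel (a random-walk Metropolis move standing in for the learnt sampling policy), reweighted towards the target along an annealing schedule (standing in for the learnt twist functions), and resampled when the effective sample size drops. The resulting weighted particles are the kind of data an off-policy RL update or an importance-weighted replay buffer would consume. All names and settings are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Toy multi-modal target: equal mixture of 1-D Gaussians centred at -3 and +3.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2) - np.log(2.0)

def log_init(x):
    # Broad Gaussian initial distribution N(0, 3^2).
    return -0.5 * (x / 3.0) ** 2

def log_annealed(x, beta):
    # Intermediate density: geometric path between the initial and target densities.
    return (1.0 - beta) * log_init(x) + beta * log_target(x)

def smc(n_particles=1024, n_steps=20, mh_steps=3):
    x = rng.normal(scale=3.0, size=n_particles)   # particles drawn from the initial distribution
    log_w = np.zeros(n_particles)                 # log importance weights
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    for beta_prev, beta in zip(betas[:-1], betas[1:]):
        # Reweight: incremental importance weight for the new intermediate target.
        log_w += (beta - beta_prev) * (log_target(x) - log_init(x))
        # Resample if the effective sample size falls below half the particle count.
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:
            x = x[rng.choice(n_particles, size=n_particles, p=w)]
            log_w[:] = 0.0
        # Move: Metropolis random walk leaving the current intermediate target invariant
        # (a learnt proposal kernel would take this role in the method described above).
        for _ in range(mh_steps):
            prop = x + rng.normal(scale=1.0, size=n_particles)
            accept = np.log(rng.uniform(size=n_particles)) < log_annealed(prop, beta) - log_annealed(x, beta)
            x = np.where(accept, prop, x)
    return x, log_w

particles, log_weights = smc()
print("weighted mean of |x| (should be near 3):",
      np.average(np.abs(particles), weights=np.exp(log_weights - log_weights.max())))

In the paper's setting, the annealing schedule and Metropolis kernel above would be replaced by the learnt twist functions and sampling policy, and the weighted particles would feed the off-policy training signal and replay buffer rather than a simple moment estimate.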
Similar Papers
Nonlocal Monte Carlo via Reinforcement Learning
Machine Learning (CS)
Helps computers solve hard problems faster.
Amortized Sampling with Transferable Normalizing Flows
Machine Learning (CS)
Teaches computers to predict how molecules will move.
Efficient Approximate Posterior Sampling with Annealed Langevin Monte Carlo
Machine Learning (CS)
Makes AI create realistic images from messy data.