Boosted GFlowNets: Improving Exploration via Sequential Learning
By: Pedro Dall'Antonia, Tiago da Silva, Daniel Augusto de Souza, and more
Potential Business Impact:
Finds rare, valuable things by exploring better.
Generative Flow Networks (GFlowNets) are powerful samplers for compositional objects that, by design, sample proportionally to a given non-negative reward. Nonetheless, in practice, they often struggle to explore the reward landscape evenly: trajectories toward easy-to-reach regions dominate training, while hard-to-reach modes receive vanishing or uninformative gradients, leading to poor coverage of high-reward areas. We address this imbalance with Boosted GFlowNets, a method that sequentially trains an ensemble of GFlowNets, each optimizing a residual reward that compensates for the mass already captured by previous models. This residual principle reactivates learning signals in underexplored regions and, under mild assumptions, ensures a monotone non-degradation property: adding boosters cannot worsen the learned distribution and typically improves it. Empirically, Boosted GFlowNets achieve substantially better exploration and sample diversity on multimodal synthetic benchmarks and peptide design tasks, while preserving the stability and simplicity of standard trajectory-balance training.
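The residual principle in the abstract can be illustrated with a small numerical sketch: each booster targets the reward mass that the previously trained GFlowNets have not yet captured. The function name `residual_reward`, the toy reward vector, and the captured-mass estimate `Z1` below are all hypothetical, chosen only to show why clipping the residual at zero reactivates learning signals in underexplored modes; this is not the paper's implementation.

```python
import numpy as np

def residual_reward(reward, prev_mass, floor=0.0):
    # Residual target for the next booster: the (non-negative) part of the
    # reward not yet covered by the mass earlier models place on each object.
    return np.maximum(reward - prev_mass, floor)

# Toy compositional space: 6 discrete objects with two high-reward modes
# (indices 0-1 and 4-5); values are illustrative, not from the paper.
reward = np.array([10.0, 9.0, 0.5, 0.5, 8.0, 7.0])

# Suppose the first GFlowNet only discovered the left-hand mode: its
# sampling distribution concentrates on objects 0-1.
p1 = np.array([0.55, 0.40, 0.02, 0.01, 0.01, 0.01])
Z1 = 0.5 * reward.sum()          # hypothetical total mass captured so far
prev_mass = Z1 * p1              # unnormalized mass per object

# The booster's residual reward is near zero on the well-covered mode but
# large on the neglected one, so objects 4-5 regain a strong gradient signal.
r2 = residual_reward(reward, prev_mass)
```

Under these toy numbers the residual is largest on the neglected mode (`r2` peaks at index 4), which is exactly the "reactivation" behavior the abstract describes.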
Similar Papers
Interpreting GFlowNets for Drug Discovery: Extracting Actionable Insights for Medicinal Chemistry
Machine Learning (CS)
Shows how computers design new medicines.
Secrets of GFlowNets' Learning Behavior: A Theoretical Study
Machine Learning (CS)
Helps AI learn to create new things better.
GFlowNets for Active Learning Based Resource Allocation in Next Generation Wireless Networks
Machine Learning (CS)
Improves wireless signals for better communication and sensing.