gfnx: Fast and Scalable Library for Generative Flow Networks in JAX
By: Daniil Tiapkin, Artem Agarkov, Nikita Morozov, and more
Potential Business Impact:
Makes AI models that create molecules and other structures learn much faster.
In this paper, we present gfnx, a fast and scalable package for training and evaluating Generative Flow Networks (GFlowNets), written in JAX. gfnx provides an extensive set of environments and metrics for benchmarking, accompanied by single-file implementations of core objectives for training GFlowNets. We include synthetic hypergrids, multiple sequence-generation environments with various editing regimes, and dedicated reward designs for molecular generation, phylogenetic tree construction, Bayesian structure learning, and sampling from the Ising model energy. Across different tasks, gfnx achieves significant wall-clock speedups over PyTorch-based baselines (such as the torchgfn library) and original author implementations: for example, up to a 55x speedup on CPU-based sequence-generation environments and up to an 80x speedup on the GPU-based Bayesian network structure learning setup. Our package provides a diverse set of benchmarks and aims to standardize empirical evaluation and accelerate research on and applications of GFlowNets. The library is available on GitHub (https://github.com/d-tiapkin/gfnx) and on PyPI (https://pypi.org/project/gfnx/); documentation is available at https://gfnx.readthedocs.io.
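The abstract mentions single-file implementations of core GFlowNet training objectives. As a rough illustration of what such an objective computes, here is a minimal JAX sketch of the trajectory balance (TB) loss, one of the standard GFlowNet objectives; the function name, array shapes, and toy data below are assumptions for this example, not gfnx's actual API.

```python
import jax
import jax.numpy as jnp

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    # TB residual per trajectory:
    #   log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x) - sum_t log P_B(s_t|s_{t+1})
    delta = log_Z + log_pf.sum(axis=-1) - log_reward - log_pb.sum(axis=-1)
    # Squared residual, averaged over the batch of trajectories.
    return jnp.mean(delta ** 2)

# Toy usage with random log-probabilities (hypothetical shapes, not gfnx's API):
key = jax.random.PRNGKey(0)
log_pf = jax.random.normal(key, (32, 8))   # 32 trajectories, 8 steps each
log_pb = jnp.zeros((32, 8))                # e.g. a deterministic backward policy
log_reward = jax.random.normal(jax.random.fold_in(key, 1), (32,))
loss = jax.jit(trajectory_balance_loss)(jnp.float32(0.0), log_pf, log_pb, log_reward)
print(loss)  # scalar TB loss
```

Because the loss is a pure function of arrays, it can be jit-compiled and vmapped like any other JAX computation, which is the kind of design that enables the wall-clock speedups reported above.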
Similar Papers
Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation
Machine Learning (CS)
Finds new medicines by building molecules atom by atom.
Proxy-Free GFlowNet
Machine Learning (CS)
Teaches computers to find the best ideas faster.
Boosted GFlowNets: Improving Exploration via Sequential Learning
Machine Learning (CS)
Finds rare, valuable things by exploring better.