Score: 2

gfnx: Fast and Scalable Library for Generative Flow Networks in JAX

Published: November 20, 2025 | arXiv ID: 2511.16592v1

By: Daniil Tiapkin, Artem Agarkov, Nikita Morozov, and more

Potential Business Impact:

Speeds up the training and evaluation of generative models (GFlowNets), cutting wall-clock compute time for research and applications such as molecular generation and structure discovery.

Business Areas:
Machine Learning Software and Scientific Computing

In this paper, we present gfnx, a fast and scalable package for training and evaluating Generative Flow Networks (GFlowNets) written in JAX. gfnx provides an extensive set of environments and metrics for benchmarking, accompanied by single-file implementations of core objectives for training GFlowNets. We include synthetic hypergrids, multiple sequence generation environments with various editing regimes, and particular reward designs for molecular generation, phylogenetic tree construction, Bayesian structure learning, and sampling from the Ising model energy. Across different tasks, gfnx achieves significant wall-clock speedups compared to PyTorch-based benchmarks (such as the torchgfn library) and author implementations. For example, gfnx achieves up to a 55x speedup on CPU-based sequence generation environments, and up to an 80x speedup in the GPU-based Bayesian network structure learning setup. Our package provides a diverse set of benchmarks and aims to standardize empirical evaluation and accelerate research and applications of GFlowNets. The library is available on GitHub (https://github.com/d-tiapkin/gfnx) and on PyPI (https://pypi.org/project/gfnx/). Documentation is available at https://gfnx.readthedocs.io.
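The speedups described above come from JAX's JIT compilation and vectorization rather than any single trick. As a minimal sketch of the underlying pattern (not gfnx's actual API; the `step` function and grid shape here are hypothetical stand-ins for a hypergrid-style environment), one can batch an environment transition with `jax.vmap` and compile it with `jax.jit`:

```python
import jax
import jax.numpy as jnp

# Hypothetical toy "environment step" for a hypergrid-like state:
# increment the chosen coordinate, clipped to the grid boundary.
def step(state, action, grid_size):
    return jnp.minimum(state.at[action].add(1), grid_size - 1)

# vmap vectorizes the step over a batch of trajectories;
# jit compiles the batched update into a single XLA kernel.
batched_step = jax.jit(
    jax.vmap(step, in_axes=(0, 0, None)),
    static_argnums=2,
)

states = jnp.zeros((4, 2), dtype=jnp.int32)   # 4 parallel 2-D states
actions = jnp.array([0, 1, 0, 1])             # coordinate to increment
next_states = batched_step(states, actions, 8)
# → [[1, 0], [0, 1], [1, 0], [0, 1]]
```

Because the whole batched transition compiles to one fused kernel, stepping thousands of trajectories costs roughly the same as stepping one, which is the source of the large gap versus eager per-sample PyTorch loops.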

Country of Origin
🇫🇷 France, 🇷🇺 Russian Federation

Repos / Data Links
https://github.com/d-tiapkin/gfnx
https://pypi.org/project/gfnx/

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)