Score: 2

DistMLIP: A Distributed Inference Platform for Machine Learning Interatomic Potentials

Published: May 28, 2025 | arXiv ID: 2506.02023v1

By: Kevin Han, Bowen Deng, Amir Barati Farimani, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Makes machine-learning models of materials run much faster by spreading the work across multiple GPUs, so simulations of nearly a million atoms finish in seconds.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

Large-scale atomistic simulations are essential to bridge computational materials and chemistry to realistic materials and drug discovery applications. In the past few years, rapid developments of machine learning interatomic potentials (MLIPs) have offered a solution to scale up quantum mechanical calculations. Parallelizing these interatomic potentials across multiple devices is a challenging but promising approach to further extending simulation scales to real-world applications. In this work, we present DistMLIP, an efficient distributed inference platform for MLIPs based on zero-redundancy, graph-level parallelization. In contrast to conventional space-partitioning parallelization, DistMLIP enables efficient MLIP parallelization through graph partitioning, allowing multi-device inference on flexible MLIP model architectures such as multi-layer graph neural networks. DistMLIP presents an easy-to-use, flexible, plug-in interface that enables distributed inference of pre-existing MLIPs. We demonstrate DistMLIP on four widely used and state-of-the-art MLIPs: CHGNet, MACE, TensorNet, and eSEN. We show that existing foundational potentials can perform near-million-atom calculations in a few seconds on 8 GPUs with DistMLIP.
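
The abstract contrasts graph-level parallelization with conventional space partitioning: instead of splitting the simulation box into spatial domains, the atomistic graph itself is partitioned so each device evaluates message passing on its own subgraph. Below is a minimal sketch of that idea, assuming a toy edge partition by source atom; the variable names, the brute-force neighbor list, and the modulo partition rule are illustrative assumptions and do not reflect the actual DistMLIP API or its zero-redundancy partitioner.

```python
import numpy as np

# Hypothetical sketch of graph-level partitioning for an atomistic graph.
# NOT the DistMLIP implementation; names and the partition rule are assumptions.

rng = np.random.default_rng(0)
n_atoms, cutoff, box = 1000, 5.0, 30.0
positions = rng.uniform(0.0, box, size=(n_atoms, 3))

# Brute-force neighbor list for illustration (production codes use cell lists).
diff = positions[:, None, :] - positions[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
src, dst = np.nonzero((dist < cutoff) & (dist > 0.0))

# Graph-level partition: assign each edge to a device by its source atom.
# A simple modulo rule is used here; a real partitioner would minimize cut edges.
n_devices = 4
edge_owner = src % n_devices

for d in range(n_devices):
    mask = edge_owner == d
    local_src, local_dst = src[mask], dst[mask]
    # Atoms this device needs resident to evaluate its edges (owned + halo).
    needed_atoms = np.unique(np.concatenate([local_src, local_dst]))
    print(f"device {d}: {mask.sum()} edges, {needed_atoms.size} atoms resident")
```

The design question such a scheme addresses is communication volume: a space-partitioning approach must exchange ghost-region atoms every layer of a multi-layer graph network, whereas partitioning the graph directly lets each device hold exactly the nodes its edges require.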

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing