Track: Track 1: Original Research/Position/Education/Attention Track
Keywords: Machine Learning Interatomic Potential, MLFF, MLIP
TL;DR: DistMLIP is an efficient, graph-parallel inference platform for machine learning interatomic potentials, scaling up to a million atoms on 8 GPUs.
Abstract: Large-scale atomistic simulations are essential to bridge computational materials
science and chemistry to realistic materials and drug discovery applications. In the past few
years, rapid developments of machine learning interatomic potentials (MLIPs) have
offered a solution to scale up quantum mechanical calculations. Parallelizing these
interatomic potentials across multiple devices is a challenging but promising
approach to further extending simulation scales to real-world applications. In this
work, we present DistMLIP, an efficient distributed inference platform for MLIPs
based on zero-redundancy, graph-level parallelization. In contrast to conventional
space-partitioning parallelization, DistMLIP parallelizes MLIPs efficiently through
graph partitioning, allowing multi-device inference on flexible model
architectures such as multi-layer graph neural networks. DistMLIP provides
an easy-to-use, flexible plug-in interface that enables distributed inference of
pre-existing MLIPs. We demonstrate DistMLIP on four widely used and state-of-
the-art MLIPs: CHGNet, MACE, TensorNet, and eSEN. We show that existing
foundation potentials can perform near-million-atom calculations in a few seconds
on 8 GPUs with DistMLIP.
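
To illustrate the graph-partitioning idea described in the abstract, below is a minimal sketch of zero-redundancy, graph-level partitioning for multi-device inference. It is not DistMLIP's actual API; the function names (build_edges, partition_edges), the block-wise atom ownership rule, and the halo bookkeeping are illustrative assumptions only.

# Hypothetical sketch of graph-level partitioning for multi-device MLIP inference.
# Names and the partitioning scheme are illustrative, not DistMLIP's real interface.
import numpy as np

def build_edges(positions, cutoff):
    """Naive O(N^2) neighbor list: one directed edge (src, dst) per pair within cutoff."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    src, dst = np.nonzero((dist < cutoff) & (dist > 0))
    return np.stack([src, dst], axis=0)  # shape (2, num_edges)

def partition_edges(edges, num_atoms, num_devices):
    """Assign each edge to the device owning its destination (center) atom, so no
    edge is computed twice; record the 'halo' source atoms each device must receive."""
    owner = (np.arange(num_atoms) * num_devices) // num_atoms  # contiguous atom blocks
    parts = []
    for d in range(num_devices):
        mask = owner[edges[1]] == d                      # edges centered on local atoms
        local_edges = edges[:, mask]
        halo = np.setdiff1d(local_edges[0], np.nonzero(owner == d)[0])
        parts.append({"edges": local_edges, "halo_atoms": halo})
    return parts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 20.0, size=(1000, 3))         # toy structure, 1000 atoms
    edges = build_edges(pos, cutoff=5.0)
    parts = partition_edges(edges, num_atoms=len(pos), num_devices=4)
    for d, p in enumerate(parts):
        print(f"device {d}: {p['edges'].shape[1]} edges, {len(p['halo_atoms'])} halo atoms")

In this toy scheme each device evaluates only the edges centered on the atoms it owns (zero redundancy) and fetches halo atom features from peers once per message-passing layer; spatial-partitioning approaches instead replicate boundary regions of space on multiple devices.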
Submission Number: 100