Keywords: machine learning, machine-learned interatomic potentials, atomistic simulations, geometric machine learning, graph neural networks, unconstrained, equivariant
TL;DR: Unconstrained machine-learned interatomic potentials achieve state-of-the-art accuracy and efficiency for atomic-scale systems
Abstract: Machine-learned interatomic potentials (MLIPs) are increasingly used to replace computationally demanding electronic-structure calculations to model matter at the atomic scale. The most commonly used model architectures are constrained to fulfill exactly a number of physical laws, from geometric symmetries to energy conservation. Evidence is mounting that relaxing some of these constraints can be beneficial to the efficiency and (somewhat surprisingly) accuracy of MLIPs, even though care should be taken to avoid qualitative failures associated with the breaking of physical symmetries. Given the irresistible trend of scaling models up to larger numbers of parameters and training configurations, a very important question is how unconstrained MLIPs behave in this limit. Here we investigate this issue, showing that — when trained on some of the current large-scale datasets — unconstrained models can be competitive in accuracy and superior in speed when compared to physically constrained models. We assess these models both in terms of benchmark accuracy and in terms of usability in practical scenarios, focusing on static simulation workflows such as geometry optimization and lattice dynamics. We conclude that accurate unconstrained models can be applied with confidence, especially given that simple inference-time modifications can be used to recover observables that are fully consistent with the relevant physical symmetries.
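The inference-time symmetry restoration mentioned in the abstract can be illustrated with a minimal sketch. One common approach (assumed here for illustration; the paper's exact procedure may differ) is to average an unconstrained model's energy prediction over random rigid rotations of the input geometry, so that the resulting observable is rotationally invariant by construction. The `model` callable below is a hypothetical stand-in for any unconstrained MLIP that maps atomic positions to an energy.

```python
import numpy as np

def random_rotation(rng):
    # Draw a uniformly random 3x3 rotation matrix (Haar measure on SO(3))
    # via QR decomposition of a Gaussian matrix.
    A = rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(A)
    Q *= np.sign(np.diag(R))  # fix column signs -> uniform over O(3)
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1  # flip one column to restrict to SO(3)
    return Q

def symmetrized_energy(model, positions, n_samples=32, seed=0):
    # Average the (possibly non-invariant) model energy over random
    # rotations of the atomic positions; the mean is rotationally
    # invariant in the limit of many samples.
    rng = np.random.default_rng(seed)
    energies = [model(positions @ random_rotation(rng).T)
                for _ in range(n_samples)]
    return float(np.mean(energies))
```

For a model that is already invariant, the averaged prediction coincides with the direct one; for an unconstrained model, the averaging suppresses the rotation-dependent component of the prediction at the cost of extra inference calls.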
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 10419