DEQuify your force field: Towards efficient simulations using deep equilibrium models

20 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Machine Learning Force Fields, Deep Equilibrium Models
TL;DR: We speed up molecular dynamics simulations by turning a SOTA machine learning force field architecture into a deep equilibrium model.
Abstract: Machine learning force fields promise higher accuracy than manually derived force fields for molecular dynamics simulations. State-of-the-art ML force field architectures stack many equivariant graph neural network layers, resulting in long inference times and high memory costs. This work aims to improve both of these aspects while also reaching higher accuracy. Our key observation is that successive states in molecular dynamics simulations are extremely similar, yet typical architectures treat each step independently, discarding this information. We show how deep equilibrium models (DEQs) can exploit this temporal correlation by recycling neural network features from previous time steps. Specifically, we turn a state-of-the-art force field architecture into a DEQ, enabling us to improve both accuracy and speed by $10\%-20\%$ on the MD17, MD22, and OC20 200k datasets. Compared to conventional approaches, DEQs are also naturally more memory efficient, facilitating the training of more expressive models on larger systems given limited GPU memory resources.
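The central idea of the abstract can be sketched in a few lines: a DEQ computes its output as the fixed point of a layer, and because consecutive MD states are nearly identical, the fixed point from the previous step is an excellent initial guess for the next solve. The NumPy sketch below is purely illustrative: the contraction map `make_layer` stands in for a real equivariant GNN layer, and all names, dimensions, and constants are hypothetical, not taken from the paper's implementation.

```python
import numpy as np


def solve_fixed_point(f, z0, tol=1e-8, max_iter=500):
    """Iterate z <- f(z) until ||f(z) - z|| < tol; return (z*, #iterations)."""
    z = z0
    for i in range(1, max_iter + 1):
        z_next = f(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, i
        z = z_next
    return z, max_iter


def make_layer(x, W):
    """Toy stand-in for a DEQ layer: a contraction whose fixed point depends on x."""
    return lambda z: np.tanh(W @ z + x)


rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 8))  # small spectral norm -> contraction

# Two consecutive MD "states": x2 is a tiny perturbation of x1,
# mimicking the similarity of successive simulation steps.
x1 = rng.standard_normal(8)
x2 = x1 + 1e-3 * rng.standard_normal(8)

z1, iters_step1 = solve_fixed_point(make_layer(x1, W), np.zeros(8))

# Conventional cold start for step 2: solve from scratch each time...
_, iters_cold = solve_fixed_point(make_layer(x2, W), np.zeros(8))
# ...versus recycling the previous step's fixed point as the initial guess.
z2, iters_warm = solve_fixed_point(make_layer(x2, W), z1)

print(f"cold start: {iters_cold} iters, warm start: {iters_warm} iters")
```

Because the warm start begins much closer to the new fixed point, it converges in markedly fewer iterations, which is the mechanism behind the claimed speedup; the memory advantage comes from DEQs not storing per-layer activations for backpropagation, which this forward-only sketch does not show.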
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1990