ELoRA: Low-Rank Adaptation for Equivariant GNNs

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Pre-trained interatomic potentials have become a new paradigm for atomistic materials simulations, enabling accurate and efficient predictions across diverse chemical systems. Despite their promise, fine-tuning is often required on complex tasks to achieve high accuracy. Traditional parameter-efficient fine-tuning approaches are effective in NLP and CV; however, when applied to SO(3)-equivariant pre-trained interatomic potentials, they inevitably break equivariance—a critical property for preserving physical symmetries. In this paper, we introduce ELoRA (Equivariant Low-Rank Adaptation), a novel fine-tuning method designed specifically for SO(3)-equivariant Graph Neural Networks (GNNs), the backbones of multiple pre-trained interatomic potentials. ELoRA adopts a path-dependent decomposition for weight updates, which offers two key advantages: (1) it preserves SO(3) equivariance throughout the fine-tuning process, ensuring physically consistent predictions, and (2) it leverages low-rank adaptations to significantly improve data efficiency. We prove that ELoRA maintains equivariance and demonstrate its effectiveness through comprehensive experiments. On the rMD17 organic dataset, ELoRA achieves a 25.5% improvement in energy prediction accuracy and a 23.7% improvement in force prediction accuracy compared to full-parameter fine-tuning. Similarly, across 10 inorganic datasets, ELoRA achieves average improvements of 12.3% and 14.4% in energy and force predictions, respectively. Code will be made publicly available at https://github.com/hyjwpk/ELoRA.
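To make the core idea concrete, below is a minimal PyTorch sketch of a LoRA-style update restricted to the scalar path weights of an equivariant tensor-product layer. In SO(3)-equivariant GNNs, the learnable parameters are typically scalar coefficients attached to each Clebsch-Gordan path, so a low-rank update to these scalars cannot mix irrep components and therefore leaves equivariance intact. The class name `LoRAPathWeights`, the dimensions, and the specific wiring are illustrative assumptions for exposition; they are not the paper's exact ELoRA formulation.

```python
import torch
import torch.nn as nn

class LoRAPathWeights(nn.Module):
    """Illustrative sketch: low-rank update of scalar path weights.

    ASSUMPTION: the pre-trained layer maps invariant (scalar) features to one
    weight per Clebsch-Gordan path. Because both the inputs and outputs are
    rotation-invariant scalars, adding a LoRA update here cannot break SO(3)
    equivariance of the equivariant layer that consumes these path weights.
    """

    def __init__(self, in_features: int, num_paths: int, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        # Frozen pre-trained weight: invariant features -> per-path scalar weights.
        self.base = nn.Linear(in_features, num_paths, bias=False)
        self.base.weight.requires_grad_(False)
        # Trainable low-rank factors (standard LoRA init: B = 0 so training starts
        # from the pre-trained behavior, A small random).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_paths, rank))
        self.scale = alpha / rank

    def forward(self, invariant_feats: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + (alpha/r) * B @ A, applied only to scalars.
        update = self.scale * (invariant_feats @ self.lora_A.T @ self.lora_B.T)
        return self.base(invariant_feats) + update

if __name__ == "__main__":
    layer = LoRAPathWeights(in_features=64, num_paths=16, rank=4)
    x = torch.randn(8, 64)        # invariant per-node or per-edge features
    print(layer(x).shape)         # torch.Size([8, 16]) scalar path weights
```

In this sketch only the two small factors `lora_A` and `lora_B` are trained, which is where the data efficiency of low-rank adaptation comes from; the full path-dependent decomposition used by ELoRA is described in the paper.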
Lay Summary: Designing a better battery or medicine starts with understanding how atoms interact. AI tools are increasingly used to predict these atomic behaviors, helping speed up the discovery process, but they often struggle when the material is new or especially complex. To improve them, scientists usually make small adjustments using new data. The problem is that many of these adjustments accidentally break an important physical rule: if you rotate the molecule, the prediction should rotate too. We present a new method called ELoRA that avoids this issue. It carefully updates the AI while making sure the model still respects this rotational symmetry, and it works well with only a small amount of new data. In experiments, ELoRA makes predictions significantly more accurate across a variety of chemical systems. This could make AI tools more dependable for studying new materials in scientific research and industry.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/hyjwpk/ELoRA
Primary Area: Applications->Chemistry, Physics, and Earth Sciences
Keywords: graph neural network, equivariance, parameter-efficient fine-tuning, low-rank adaptation, interatomic potential
Submission Number: 14791