Keywords: machine learning interatomic potentials, equivariance, sparsity-promoting
TL;DR: This work proposes a sparsity-promoting fine-tuning method for equivariant MLIPs that integrates equivariant constraints with selective parameter pruning.
Abstract: Pre-trained materials foundation models, or machine learning interatomic potentials, leverage general physicochemical knowledge to effectively approximate potential energy surfaces. However, they often require domain-specific calibration because of physicochemical diversity and mismatches between practical computational settings and those used to construct the pre-training data. We propose a sparsity-promoting fine-tuning method for E(3)-equivariant materials foundation models that prunes low-contribution parameters during training. Across molecular and crystalline benchmarks, our approach updates only 3% of parameters, and in some cases as little as 0.5%, while matching or exceeding the accuracy of full fine-tuning. Beyond energy and force calibration, we apply our method to magnetic moment prediction and magnetism-aware total-energy estimation, broadening the applicability of materials foundation models. Analysis of sparsity patterns further reveals physically interpretable signatures, such as enhanced $d$-orbital contributions in transition-metal systems. Overall, our results establish sparsity-promoting fine-tuning of equivariant models as a flexible and interpretable route to domain specialization of materials foundation models.
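To make the core idea concrete, here is a minimal sketch of sparsity-promoting fine-tuning via selective parameter pruning, assuming a PyTorch-style training loop and a simple magnitude-of-update criterion. This is an illustrative approximation, not the authors' implementation: the names (`SmallNet`, `sparse_finetune`, `keep_fraction`) are hypothetical, and the paper's actual pruning criterion and equivariance-preserving masking may differ.

```python
# Minimal sketch (assumption, not the paper's code): fine-tune a pre-trained
# model while keeping only the largest parameter updates, reverting the rest
# to their pre-trained values so that only a small fraction changes.
import torch
import torch.nn as nn


class SmallNet(nn.Module):
    """Toy stand-in for a pre-trained (e.g. equivariant MLIP) backbone."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(8, 32), nn.SiLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.layers(x)


def sparse_finetune(model, data, keep_fraction=0.03, lr=1e-3, epochs=20):
    """Fine-tune, then after each epoch keep only the top `keep_fraction`
    of parameter updates (by absolute magnitude) and revert the others."""
    reference = {n: p.detach().clone() for n, p in model.named_parameters()}
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Rank all parameter updates by magnitude and find the keep threshold.
        deltas = torch.cat([(p.detach() - reference[n]).abs().flatten()
                            for n, p in model.named_parameters()])
        k = max(1, int(keep_fraction * deltas.numel()))
        threshold = torch.topk(deltas, k).values.min()
        # Revert low-contribution parameters to their pre-trained values.
        with torch.no_grad():
            for n, p in model.named_parameters():
                small = (p - reference[n]).abs() < threshold
                p[small] = reference[n][small]
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(64, 8)
    y = x.sum(dim=1, keepdim=True)  # toy regression target
    sparse_finetune(SmallNet(), [(x, y)], keep_fraction=0.03)
```

In this sketch the 3% figure from the abstract corresponds to `keep_fraction=0.03`; the resulting mask of retained parameters is what would be inspected for interpretable sparsity patterns.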
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 2948