Keywords: Knowledge distillation, Machine learning interatomic potentials
TL;DR: The data efficiency of distilling universal machine learning potentials can be further improved by pre-training the student model.
Abstract: Machine learning interatomic potentials (MLIPs) bridge the gap between the accuracy of quantum mechanics and the efficiency of classical simulations. Although universal MLIPs (u-MLIPs) offer broad transferability across diverse chemical spaces, their high inference costs limit their scalability in large-scale simulations. In this paper, we propose LightPFP, a knowledge distillation framework that trains computationally efficient task-specific MLIPs (ts-MLIPs) tailored to specific systems by leveraging u-MLIPs. Unlike prior approaches that pre-train only the u-MLIP on large datasets, LightPFP incorporates an additional step in which the student models are pre-trained as well.
This dual pre-training strategy significantly enhances the data efficiency of the student models, enabling them to achieve higher performance with limited training data. We validate the effectiveness of LightPFP on \ce{Ni3Al} alloy simulations, showcasing its data efficiency, and further compare its performance against other methods in estimating the mechanical and grain-boundary properties of the AlCoCrFeNi high-entropy alloy.
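For concreteness, the following is a minimal sketch, not taken from the paper, of how a distillation workflow with a pre-trained student might look in PyTorch. The model interfaces (returning energy and force predictions), the data loaders, and the equal energy/force loss weighting are all illustrative assumptions, not the authors' implementation of LightPFP.

```python
import torch
import torch.nn as nn


def pretrain_student(student: nn.Module, generic_loader, epochs: int = 10) -> nn.Module:
    """Hypothetical student pre-training step: fit the ts-MLIP on a broad,
    inexpensive dataset (e.g. teacher-labeled generic structures) before
    task-specific distillation."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for structures, energies, forces in generic_loader:
            pred_e, pred_f = student(structures)          # assumed (energy, forces) output
            loss = loss_fn(pred_e, energies) + loss_fn(pred_f, forces)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student


def distill(student: nn.Module, teacher: nn.Module, task_loader, epochs: int = 50) -> nn.Module:
    """Hypothetical distillation step: fine-tune the pre-trained student on
    task-specific structures, with the frozen teacher u-MLIP as label source."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for structures in task_loader:
            with torch.no_grad():
                t_e, t_f = teacher(structures)            # teacher labels replace DFT
            s_e, s_f = student(structures)
            loss = loss_fn(s_e, t_e) + loss_fn(s_f, t_f)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Under these assumptions, the "dual pre-training" simply means calling `pretrain_student` before `distill`, so the student enters the task-specific stage with a useful initialization rather than random weights.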
Submission Track: Paper Track (Short Paper)
Submission Category: AI-Guided Design
Institution Location: Tokyo, Japan
Submission Number: 41