[AML] Unified Material Transformer as Scalable Material Property Predictor

Authors: THU 2024 Winter AML Submission22 Authors

11 Dec 2024 (modified: 02 Mar 2025) · THU 2024 Winter AML Submission · CC BY 4.0
Keywords: Material Property Prediction, Pretraining, Interpretability Analysis
Abstract: Predicting material properties from structural information is a critical task in materials science, where density functional theory (DFT)-based simulations remain the gold standard. However, DFT computations are notoriously expensive, motivating deep learning methods such as graph neural networks (GNNs) to accelerate and improve property prediction. Although GNNs have shown promising results, they remain limited in capturing long-range global interactions and lack clear evidence of scalability. In this paper, we propose a transformer-based unified framework for material property prediction. The framework introduces a novel tokenizer coupled with a 3D positional encoding scheme to effectively capture spatial information. Both BERT-style and GPT-style pretraining strategies are used to learn robust, generalizable representations of material structures. The model thus achieves performance on par with or better than multiple specialized downstream models while maintaining a single, consistent network architecture. Furthermore, an interpretability analysis of the learned embeddings shows that the element embeddings agree closely with well-known chemical principles and exhibit meaningful structure, suggesting that they capture intrinsic properties of the elements. This indicates that our pretrained transformer model captures and organizes intrinsic chemical and structural knowledge, offering a new avenue for scalable and interpretable material property prediction.
Submission Number: 22
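
The abstract does not spell out the tokenizer or positional-encoding details, so the PyTorch sketch below is only one plausible reading of "element tokens plus a 3D positional encoding feeding a single Transformer with a property head". All names (sinusoidal_3d_encoding, MaterialTransformer) and design choices (sinusoidal coordinate channels, mean pooling) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): atoms as element tokens plus a
# sinusoidal encoding of their 3D coordinates, fed to a vanilla Transformer
# encoder with a pooled regression head for one scalar material property.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sinusoidal_3d_encoding(coords: torch.Tensor, dim: int) -> torch.Tensor:
    """Map coordinates (batch, n_atoms, 3) to (batch, n_atoms, dim).

    Each axis gets dim // 3 sinusoidal channels; any remainder is zero-padded.
    """
    per_axis = dim // 3
    # Geometrically spaced frequencies, one per sin/cos pair.
    freqs = torch.exp(
        torch.arange(0, per_axis, 2, dtype=torch.float32) * (-4.0 / per_axis)
    )
    parts = []
    for axis in range(3):
        angles = coords[..., axis : axis + 1] * freqs     # (batch, n_atoms, per_axis // 2)
        parts.append(torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1))
    enc = torch.cat(parts, dim=-1)
    if enc.shape[-1] < dim:                                # pad to the model dimension
        enc = F.pad(enc, (0, dim - enc.shape[-1]))
    return enc


class MaterialTransformer(nn.Module):
    """Toy unified predictor: element embeddings + 3D positional encoding -> Transformer."""

    def __init__(self, n_elements: int = 119, dim: int = 96, depth: int = 4, heads: int = 4):
        super().__init__()
        self.element_embed = nn.Embedding(n_elements, dim, padding_idx=0)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, 1)                      # scalar property, e.g. formation energy

    def forward(self, atomic_numbers: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # atomic_numbers: (batch, n_atoms) long, 0 = padding; coords: (batch, n_atoms, 3) float
        x = self.element_embed(atomic_numbers)
        x = x + sinusoidal_3d_encoding(coords, x.shape[-1])
        pad = atomic_numbers == 0
        h = self.encoder(x, src_key_padding_mask=pad)
        # Mean-pool over real atoms only, then regress the property.
        h = h.masked_fill(pad.unsqueeze(-1), 0.0).sum(1) / (~pad).sum(1, keepdim=True).clamp(min=1)
        return self.head(h).squeeze(-1)


if __name__ == "__main__":
    model = MaterialTransformer()
    z = torch.tensor([[8, 14, 8, 0]])      # e.g. an SiO2 fragment, padded to length 4
    xyz = torch.randn(1, 4, 3)             # Cartesian coordinates (arbitrary units here)
    print(model(z, xyz).shape)             # torch.Size([1])
```

In this reading, the masked-language-model (BERT-style) or next-token (GPT-style) pretraining described in the abstract would reuse the same encoder, swapping the regression head for a token-level prediction head; that part is likewise left unspecified by the abstract.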
