All-atom Diffusion Transformers: Unified generative modelling of molecules and materials

Published: 03 Mar 2025, Last Modified: 15 Apr 2025
Venue: AI4MAT-ICLR-2025 Spotlight
License: CC BY 4.0
Submission Track: Multi-Modal Data for Materials Design - Full Paper
Submission Category: AI-Guided Design
Keywords: Diffusion, Transformers, Crystals, Molecules, Foundation models
TL;DR: Latent diffusion transformer for unified generative modelling of all 3D atomic systems; SOTA results for crystal and molecule generation through transfer learning.
Abstract: Diffusion models are the standard toolkit for generative modelling of 3D atomic systems. However, for different types of atomic systems -- such as molecules and materials -- the generative processes are usually highly specific to the target system, despite the underlying physics being the same. We introduce the All-atom Diffusion Transformer (ADiT), a unified latent diffusion framework for jointly generating both periodic materials and non-periodic molecular systems using the same model: (1) an autoencoder maps a unified, all-atom representation of molecules and materials to a shared latent embedding space; and (2) a diffusion model is trained to generate new latent embeddings that the autoencoder can decode to sample new molecules or materials. Experiments on the QM9 and MP20 datasets demonstrate that a jointly trained ADiT generates realistic and valid molecules as well as materials, exceeding state-of-the-art results from molecule- and crystal-specific models. ADiT uses standard Transformers for both the autoencoder and the diffusion model, resulting in significant speedups during training and inference compared to equivariant diffusion models. Scaling ADiT up to half a billion parameters predictably improves performance, representing a step towards broadly generalizable foundation models for generative chemistry.
Submission Number: 10
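To make the two-stage design described in the abstract concrete, below is a minimal, self-contained PyTorch sketch of a latent diffusion pipeline in the spirit of ADiT. This is not the authors' implementation: the module names (AllAtomAutoencoder, LatentDenoiser), the hyperparameters, and the linear noise schedule are illustrative assumptions, and the paper's actual tokenization of periodic vs. non-periodic systems, loss terms, and sampling procedure will differ.

```python
# Minimal sketch (not the authors' code) of a two-stage latent diffusion pipeline:
# stage 1 compresses per-atom tokens into latent tokens with a Transformer
# autoencoder; stage 2 trains a Transformer denoiser on those latents with a
# simple DDPM-style noise-prediction loss. All names/values are illustrative.
import torch
import torch.nn as nn


class AllAtomAutoencoder(nn.Module):
    """Encodes per-atom features (element type + 3D coordinates) into latent
    tokens and decodes them back; molecules and crystals share this space."""

    def __init__(self, num_elements=100, d_model=256, d_latent=8, n_layers=4):
        super().__init__()
        self.embed_type = nn.Embedding(num_elements, d_model)
        self.embed_pos = nn.Linear(3, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.to_latent = nn.Linear(d_model, d_latent)
        self.from_latent = nn.Linear(d_latent, d_model)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.pred_type = nn.Linear(d_model, num_elements)
        self.pred_pos = nn.Linear(d_model, 3)

    def encode(self, atom_types, coords):
        h = self.embed_type(atom_types) + self.embed_pos(coords)
        return self.to_latent(self.encoder(h))

    def decode(self, z):
        h = self.decoder(self.from_latent(z))
        return self.pred_type(h), self.pred_pos(h)


class LatentDenoiser(nn.Module):
    """Plain (non-equivariant) Transformer that predicts the noise added to
    latent tokens, conditioned on the diffusion timestep."""

    def __init__(self, d_latent=8, d_model=256, n_layers=6):
        super().__init__()
        self.proj_in = nn.Linear(d_latent, d_model)
        self.embed_t = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.proj_out = nn.Linear(d_model, d_latent)

    def forward(self, z_noisy, t):
        h = self.proj_in(z_noisy) + self.embed_t(t[:, None, None].float())
        return self.proj_out(self.blocks(h))


# Toy training step on one random "molecule" with 12 atoms.
ae, denoiser = AllAtomAutoencoder(), LatentDenoiser()
atom_types = torch.randint(0, 100, (1, 12))
coords = torch.randn(1, 12, 3)
with torch.no_grad():
    z = ae.encode(atom_types, coords)                 # stage 1: latent tokens
t = torch.randint(0, 1000, (1,))
noise = torch.randn_like(z)
alpha = (1.0 - t.float() / 1000).view(-1, 1, 1)       # crude linear schedule (assumption)
z_noisy = alpha.sqrt() * z + (1 - alpha).sqrt() * noise
loss = nn.functional.mse_loss(denoiser(z_noisy, t), noise)  # stage 2: denoising loss
loss.backward()
```

The design choice this sketch highlights is the one the abstract emphasizes: both stages use standard Transformer encoders over per-atom tokens rather than equivariant networks, which is what lets periodic and non-periodic systems share a single latent space and a single denoiser.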