Abstract: Equivariant diffusion models have achieved impressive performance in 3D molecule generation. These models incorporate Euclidean symmetries of 3D molecules by utilizing an SE(3)-equivariant denoising network. However, specialized equivariant architectures limit the scalability and efficiency of diffusion models. In this paper, we propose an approach that relaxes such equivariance constraints. Specifically, our approach learns a sample-dependent SO(3) transformation for each molecule to construct an aligned latent space. A non-equivariant diffusion model is then trained over the aligned representations. Experimental results demonstrate that our approach performs significantly better than previously reported non-equivariant models. It yields sample quality comparable to state-of-the-art equivariant diffusion models and offers improved training and sampling efficiency. Our code is available at: https://github.com/skeletondyh/RADM
Lay Summary: In this work, we investigate whether strict symmetry constraints on neural networks are necessary for generative models of 3D molecules. We find that they are not. We propose a method that learns to rotate molecules so that shared structures are arranged in similar orientations, enabling generative models to recognize them more easily.
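The core idea of aligning molecules with a rotation before generative modeling can be illustrated with a minimal sketch. The paper learns a sample-dependent SO(3) transform; here, purely as a stand-in assumption, we use PCA-based canonicalization to produce a proper rotation that puts each point cloud into a common orientation:

```python
# Sketch: canonically aligning a molecule's 3D coordinates with a rotation.
# PCA alignment is an illustrative stand-in for the paper's learned
# sample-dependent SO(3) transform, not the actual method.
import numpy as np

def align_coords(coords):
    # Center the point cloud at the origin (removes translation)
    centered = coords - coords.mean(axis=0)
    # Eigenvectors of the covariance give candidate principal axes
    cov = centered.T @ centered
    _, vecs = np.linalg.eigh(cov)
    R = vecs[:, ::-1]  # order axes by decreasing variance
    # Ensure det(R) = +1 so R is a proper rotation in SO(3)
    if np.linalg.det(R) < 0:
        R[:, -1] *= -1
    return centered @ R

coords = np.random.default_rng(0).normal(size=(10, 3))
aligned = align_coords(coords)
```

After alignment, a non-equivariant model sees molecules in a consistent frame, so it no longer has to spend capacity learning rotational symmetry itself.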
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Non-equivariant diffusion, 3D molecule generation
Submission Number: 16184