Combining Graph Attention and Recurrent Neural Networks in a Variational Autoencoder for Molecular Representation Learning and Drug Design

Published: 17 Jun 2024, Last Modified: 16 Jul 2024 · ML4LMS Poster · CC BY-SA 4.0
Keywords: Molecular Representation Learning, Graph Neural Networks, LSTM, Variational Autoencoder
TL;DR: We explore the fusion of GNNs with RNNs within a VAE framework for molecular representation learning, demonstrating competitive performance in QSAR benchmarks, high validity & drug-likeness of sampled molecules, and robust latent space interpolation.
Abstract: Finding a meaningful molecular representation that can be leveraged for a variety of tasks in the chemical sciences and drug discovery is of wide interest, and new representation learning techniques are continuously being explored. Here, we investigate the fusion of graph attention neural networks with recurrent neural networks within a variational autoencoder framework for molecular representation learning. This combination leverages the strengths of both architectures to capture properties of molecular structures, enabling more effective encoding and flexible decoding. With the resulting representation, we observe competitive performance on quantitative structure–activity relationship (QSAR) benchmarks, high validity and drug-likeness of randomly sampled molecules, and robustness under linear latent-space interpolation between two molecules. Our approach holds promise for facilitating downstream tasks such as clustering, QSAR, virtual screening, and generative molecular design, all unified in one molecular representation.
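The abstract names three building blocks: a graph-attention encoder, a VAE latent space (sampled via the reparameterization trick), and linear interpolation between two molecules' latent codes. A minimal NumPy sketch of those pieces is shown below. All function and parameter names are hypothetical illustrations, not the authors' implementation; in particular, the paper's RNN/LSTM decoder over molecular strings is omitted here for brevity.

```python
import numpy as np

def graph_attention(H, A, W, a):
    """One single-head GAT-style layer (hypothetical, dense adjacency).
    H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, Fp) projection; a: (2*Fp,) attention vector."""
    Z = H @ W                                        # project node features
    N = Z.shape[0]
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            v = a @ np.concatenate([Z[i], Z[j]])     # e_ij = LeakyReLU(a^T [z_i || z_j])
            e[i, j] = np.maximum(v, 0.2 * v)
    e = np.where(A > 0, e, -1e9)                     # attend only over edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)        # row-wise softmax
    return alpha @ Z                                 # (N, Fp) updated features

def encode(H, A, W, a, W_mu, W_logvar):
    """Mean-pool attended node features, then map to Gaussian parameters."""
    g = graph_attention(H, A, W, a).mean(axis=0)     # simple graph readout
    return g @ W_mu, g @ W_logvar                    # mu, log-variance

def reparameterize(mu, logvar, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def interpolate(z1, z2, steps=5):
    """Linear latent-space path between two molecules' codes."""
    return np.stack([(1 - t) * z1 + t * z2 for t in np.linspace(0.0, 1.0, steps)])
```

Each interpolated latent would then be passed through the recurrent decoder to obtain an intermediate molecule; the paper's robustness claim concerns the validity of molecules decoded along this path.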
Supplementary Material: zip
Poster: pdf
Submission Number: 45