High-Content Similarity-Based Virtual Screening Using a Distance Aware Transformer Model

Published: 05 Apr 2022, Last Modified: 05 May 2023 (MLDD Poster)
Keywords: Virtual Screening, Similarity Search, Deep Learning, Transformer Model, Latent Space Sampling
Abstract: Molecular similarity search is an often-used method in drug discovery, especially in virtual screening studies. While simple one- or two-dimensional similarity metrics can be applied to search databases containing billions of molecules in a reasonable amount of time, this is not the case for complex three-dimensional methods. In this work, we trained a transformer model to autoencode tokenized SMILES strings using a custom loss function developed to conserve similarities in latent space. This allows the direct sampling of molecules in the generated latent space based on their Euclidean distance. Reducing the similarity between molecules to their Euclidean distance in latent space allows the model to perform independently of the similarity metric it was trained on, thus enabling high-content screening with time-consuming 3D similarity metrics. We show that the presence of a specific loss function for similarity conservation greatly improved the model’s ability to predict highly similar molecules. When applied to a database containing 1.5 billion molecules, our model reduced the relevant search space by five orders of magnitude. We also show that our model generalized adequately when trained on a relatively small dataset of representative structures. The method presented herein thereby provides new means of substantially reducing the relevant search space in virtual screening approaches, thus greatly increasing their throughput. Additionally, the distance awareness of the model makes the performance of this method independent of the underlying similarity metric.
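The two ideas at the heart of the abstract — a loss term that makes Euclidean distances in latent space track molecular dissimilarity, and a cheap distance-based prefilter that shrinks the search space before expensive 3D scoring — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the latent vectors stand in for transformer encoder outputs, and a Tanimoto-style similarity in [0, 1] is assumed.

```python
import numpy as np

def similarity_conservation_loss(z, sim):
    """Hypothetical distance-aware loss term (illustrative, not the paper's code).

    Penalizes the squared gap between pairwise Euclidean distances of latent
    vectors z (shape: n_molecules x latent_dim) and pairwise dissimilarities
    1 - sim, where sim is an n x n similarity matrix with values in [0, 1].
    In training this would be added to the SMILES reconstruction loss.
    """
    diff = z[:, None, :] - z[None, :, :]          # all pairwise differences
    dist = np.sqrt((diff ** 2).sum(axis=-1))      # Euclidean distance matrix
    target = 1.0 - sim                            # dissimilarity targets
    mask = ~np.eye(len(z), dtype=bool)            # ignore self-pairs
    return np.mean((dist[mask] - target[mask]) ** 2)

def prefilter_by_latent_distance(z_query, z_db, k):
    """Return indices of the k database molecules nearest the query in
    latent space -- the reduced search space handed to a slow 3D metric."""
    d = np.linalg.norm(z_db - z_query, axis=1)
    return np.argsort(d)[:k]

# Toy usage: two identical latent vectors with similarity 1 incur zero loss,
# and the prefilter ranks database entries by latent distance to the query.
z = np.array([[0.0, 0.0], [0.0, 0.0]])
print(similarity_conservation_loss(z, np.ones((2, 2))))   # 0.0
hits = prefilter_by_latent_distance(np.zeros(2),
                                    np.array([[5.0, 5.0], [0.1, 0.0]]), k=1)
print(hits)                                               # [1]
```

Because the metric only enters through the target matrix `sim`, the same latent-space search works unchanged whichever similarity the model was trained to conserve, which is the metric-independence the abstract claims.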
TL;DR: We trained a transformer-based model to map molecular similarities to Euclidean distances, enabling efficient search of ultra-large databases.