ScaLES: Scalable Latent Exploration Score for Pre-Trained Generative Networks

Published: 17 Jun 2024, Last Modified: 17 Jul 2024 · ICML 2024 AI4Science Poster · CC BY 4.0
Keywords: Bayesian Optimization, Latent Space Optimization, VAE, Drug discovery
TL;DR: We introduce a new constraint for latent space optimization that mitigates over-exploration in the latent space, resulting in higher quality solutions.
Abstract: We develop the Scalable Latent Exploration Score (ScaLES) to mitigate over-exploration in Latent Space Optimization (LSO), a popular method for solving black-box discrete optimization problems. LSO utilizes continuous optimization within the latent space of a Variational Autoencoder (VAE) and is known to be susceptible to over-exploration, which manifests in unrealistic solutions that reduce its practicality. ScaLES is an exact and theoretically motivated method that leverages the trained decoder's approximation of the data distribution. ScaLES can be calculated with any existing decoder, e.g., from a VAE, without additional training, architectural changes, or access to the training data. Our evaluation across five LSO benchmark tasks and three VAE architectures demonstrates that ScaLES enhances the quality of the solutions while maintaining high objective values, improving upon existing methods. We believe that ScaLES' ability to identify out-of-distribution areas, together with its differentiability and computational tractability, will open new avenues for LSO. To help the reviewers assess ScaLES, we include an anonymous Colab replicating some results.
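The abstract describes ScaLES as an exact, differentiable score computed from any trained decoder, which LSO can use as a constraint against over-exploration. Below is a minimal hedged sketch of one plausible realization of such a score: the decoder's own log-likelihood of its most probable decoding, which is differentiable in the latent point and requires no extra training. The `ToyDecoder` and the function name `scales_like_score` are illustrative assumptions, not the paper's actual architecture or formula.

```python
# Sketch of a ScaLES-like latent score, ASSUMING the score is the decoder's
# own log-likelihood of its argmax decoding; the exact formula is in the paper.
import torch
import torch.nn as nn

SEQ_LEN, VOCAB, LATENT = 8, 20, 4  # toy sizes, not from the paper

class ToyDecoder(nn.Module):
    """Stand-in decoder: maps a latent vector to per-position token logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT, SEQ_LEN * VOCAB)

    def forward(self, z):
        return self.net(z).view(-1, SEQ_LEN, VOCAB)  # (batch, seq, vocab)

def scales_like_score(decoder, z):
    """Sum over positions of the log-probability of the most likely token.

    Differentiable in z, so it can be added as a penalty term during
    gradient-based latent space optimization to discourage latent points
    the decoder maps to low-confidence (out-of-distribution) outputs.
    """
    log_probs = torch.log_softmax(decoder(z), dim=-1)
    return log_probs.max(dim=-1).values.sum(dim=-1)  # shape: (batch,)

torch.manual_seed(0)
decoder = ToyDecoder()
z = torch.randn(2, LATENT, requires_grad=True)
score = scales_like_score(decoder, z)
score.sum().backward()  # gradients flow back to the latent points
```

Because the score is a differentiable function of `z`, it can be combined with the black-box objective's surrogate gradient during optimization, matching the abstract's emphasis on differentiability and the fact that no retraining or architectural change is needed.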
Submission Number: 112