An Epsilon-Frontier for Faster Optimization in Nonlinear Manifold Learning

Published: 23 May 2023, Last Modified: 23 May 2023, AAAI 2022 Workshop ADAM
Keywords: epsilon-frontier, manifold learning, Bayesian optimization, optimization trajectory, autoencoder, convergence rate, latent space
TL;DR: We explore the performance of linear and nonlinear models in using prior optimization data to create a smooth low-dimensional manifold along which new optimal points may lie.
Abstract: Complex engineering problems such as compressor blade optimization often require large amounts of data and computational resources to produce optimal designs because traditional approaches operate only in the original high-dimensional design space. To mitigate this issue, we develop a simple yet effective autoencoder architecture that operates on a prior $\epsilon$-frontier drawn from examples of past optimization trajectories. This paper focuses on using such nonlinear methods to maximize dimensionality reduction on an easily verifiable synthetic dataset, providing a faster alternative to high-fidelity simulation techniques. We test a variety of component-reduction models on the $\epsilon$-frontier of a synthetic 2-dimensional dataset of K trajectories, for which we can easily verify the accuracy of alterations to the latent space. We find that our autoencoder generally converges more quickly than simpler methods such as PCA in the resulting 1-dimensional space.
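To make the comparison concrete, the sketch below illustrates the linear baseline mentioned in the abstract: PCA compressing a synthetic 2-dimensional point set with near-1-dimensional structure down to a 1-dimensional latent space and reconstructing it. This is a minimal, hypothetical illustration only; the data-generating function and all variable names are assumptions, not the paper's actual $\epsilon$-frontier dataset or autoencoder.

```python
import numpy as np

# Hypothetical stand-in for a 2-D point set with near-1-D structure,
# loosely mimicking the paper's synthetic setting (not the actual dataset).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
X = np.stack([t, 0.5 * t], axis=1) + rng.normal(scale=0.01, size=(100, 2))

# PCA via SVD: project onto the first principal direction (the linear
# analogue of encoding into a 1-D latent space), then reconstruct.
mean = X.mean(axis=0)
Xc = X - mean                                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]                                 # 1-D latent codes
X_rec = np.outer(z, Vt[0]) + mean              # decode back to 2-D

# Mean squared reconstruction error; small because the data is nearly 1-D.
err = float(np.mean(np.sum((X - X_rec) ** 2, axis=1)))
```

A nonlinear autoencoder replaces the single projection direction `Vt[0]` with learned encoder/decoder networks, which is what lets it capture curved manifolds that PCA cannot.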