Keywords: Expensive optimization, many-objective optimization, Gaussian Processes.
Abstract: Many-objective optimization (MOO) simultaneously optimizes many conflicting objectives to identify the Pareto front: a set of diverse solutions that represent different optimal trade-offs among the objectives. For expensive MOO problems, whose function evaluations are costly, computationally cheap surrogates are widely used to save the evaluation budget. However, as the number of objectives increases, the cost of learning surrogates and the difficulty of maintaining solution diversity both grow rapidly. In this paper, we propose LORA-MOO, a surrogate-assisted MOO algorithm that learns surrogates in spherical coordinates: an ordinal-regression-based surrogate for convergence and $M-1$ regression-based surrogates for diversity, where $M$ is the number of objectives. This surrogate modeling scheme allows the surrogate-assisted search to rely on a single ordinal surrogate, while the remaining surrogates are used only to select solutions for expensive evaluations, which improves optimization efficiency. The ordinal-regression surrogate predicts ordinal relation values as radial coordinates, estimating how desirable candidate solutions are in terms of convergence. Solution diversity is maintained via the angles between solutions, which is parameter-free. Experimental results show that LORA-MOO significantly outperforms other surrogate-assisted MOO methods on most MOO benchmark problems and real-world applications.
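To make the abstract's coordinate decomposition concrete, here is a minimal sketch of mapping an $M$-dimensional objective vector to one radial coordinate (a convergence proxy) and $M-1$ angular coordinates (the diversity dimensions), plus an angle-based diversity measure. It assumes the standard hyperspherical convention; the function names and the exact transform are illustrative, not the paper's implementation.

```python
import math

def to_spherical(f):
    """Map an M-dimensional objective vector to one radius and M-1 angles
    (standard hyperspherical convention; hypothetical helper, not the
    paper's exact transform)."""
    # Radial coordinate: overall magnitude, used here as a convergence proxy.
    r = math.sqrt(sum(x * x for x in f))
    angles = []
    for i in range(len(f) - 1):
        # Angle between component i and the norm of the remaining components.
        tail = math.sqrt(sum(x * x for x in f[i + 1:]))
        angles.append(math.atan2(tail, f[i]))
    return r, angles

def angle_between(f1, f2):
    """Parameter-free diversity measure: the angle between two objective
    vectors, as the abstract's angle-based diversity suggests."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```

Under this decomposition, one surrogate can regress the radius while $M-1$ surrogates regress the angles, matching the $1 + (M-1)$ split described in the abstract.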
Primary Area: Optimization (convex and non-convex, discrete, stochastic, robust)
Submission Number: 20755