Abstract: Optimising multi-objective problems with evolutionary algorithms often yields many trade-off solutions because the objectives conflict, leaving the end user with the daunting task of selecting one solution for implementation. Progressive elicitation of user preferences during optimisation ameliorates this problem by directing the search toward regions of interest and away from undesirable solutions. We propose accelerated Bayesian preference learning (ABPL), which substantially reduces both the number of user queries needed to find preferred solutions and the number of expensive algorithm evaluations. ABPL identifies promising solutions with similar preference characteristics in data from previous runs and uses them to warm-start the preference model. It further exploits the information acquired during optimisation to find additional candidate solutions, which are presented to the user alongside suggestions from Bayesian optimisation (BO).
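To make the warm-starting idea concrete, the following is a minimal illustrative sketch, not the authors' ABPL implementation. It assumes a linear utility model fitted by a Bradley-Terry-style logistic update from pairwise comparisons; all function names (`utility`, `update`, `learn`, `query`) and the simulated decision maker are hypothetical choices for this example.

```python
import math
import random

# Illustrative sketch only (not the authors' ABPL method): learn a linear
# utility u(x) = w . x over objective vectors from pairwise preference
# queries, warm-starting the weights from "previous" preference data --
# the abstract's idea of reusing solutions with similar preference
# characteristics to cut down the number of new queries.

def utility(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def update(w, preferred, other, lr=0.1):
    # Bradley-Terry-style gradient step: raise P(preferred beats other).
    d = [p - o for p, o in zip(preferred, other)]
    p_win = 1.0 / (1.0 + math.exp(-utility(w, d)))
    return [wi + lr * (1.0 - p_win) * di for wi, di in zip(w, d)]

def learn(pairs, w0, epochs=50):
    # Fit weights to a list of (preferred, other) comparisons.
    w = list(w0)
    for _ in range(epochs):
        for pref, other in pairs:
            w = update(w, pref, other)
    return w

# Simulated decision maker with a hidden utility (prefers objective 1
# twice as much as objective 2).
true_w = [2.0, 1.0]

def query(a, b):
    # Answer one pairwise comparison, returning (preferred, other).
    return (a, b) if utility(true_w, a) >= utility(true_w, b) else (b, a)

random.seed(0)
solutions = [[random.random(), random.random()] for _ in range(20)]

# Warm start from a few "previous" queries, then refine with new ones.
warm_pairs = [query(solutions[i], solutions[i + 1]) for i in range(0, 10, 2)]
new_pairs = [query(solutions[i], solutions[i + 1]) for i in range(10, 20, 2)]
w = learn(new_pairs, learn(warm_pairs, [0.0, 0.0]))

# Rank candidate solutions by the learned utility.
best = max(solutions, key=lambda x: utility(w, x))
```

Because the new queries start from a model already shaped by prior data, fewer fresh comparisons are needed before the learned ranking becomes useful; a full preference-based BO loop would replace the linear model with a preference surrogate and an acquisition function.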