A discretization-free metric for assessing quality diversity algorithms

Published: 01 Jan 2022, Last Modified: 28 Jan 2025 · GECCO Companion 2022 · CC BY-SA 4.0
Abstract: While Quality-Diversity algorithms attempt to produce a set of high-quality solutions that are diverse throughout descriptor space, in practice decision makers are often interested in solutions with specific descriptor values. In this paper we argue that current methods of evaluating Quality-Diversity algorithm performance do not properly account for a decision maker's preferences in a continuous descriptor space, and we suggest three approaches that attempt to capture the real-world trade-off between a solution's objective performance and its distance from a desired set of target descriptors. We propose a randomised metric: a Monte-Carlo process that samples n target points in descriptor space together with a small number of random weights representing different tolerances for mis-specification in a solution's descriptor values. This sampling allows us to simulate the requirements of all possible combinations of target-tolerance pairs and, given sufficient samples, to estimate average performance. We go on to formulate simple methods for comparing the average performance of algorithms: the Continuous Quality Diversity (CQD) score and the hypervolume of the objective/distance Pareto front. We show that these measures are simple to implement and robust, and that they avoid introducing an artificial discretisation of the descriptor space.
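The Monte-Carlo metric described in the abstract can be sketched as follows. This is a minimal illustration of one plausible reading, not the paper's reference implementation: it assumes a penalised score of the form f(s) - w * ||d(s) - t|| for a solution s, target t, and tolerance weight w, with targets sampled uniformly in a unit descriptor box. The function name, signature, and default weights are illustrative assumptions.

```python
import numpy as np

def continuous_qd_score(objectives, descriptors, n_targets=1000,
                        weights=(0.5, 1.0, 2.0), bounds=(0.0, 1.0), seed=0):
    """Monte-Carlo estimate of a penalised quality-diversity score (sketch).

    objectives:  shape (S,), objective value of each archive solution.
    descriptors: shape (S, D), descriptor vector of each solution.
    For every sampled target t and tolerance weight w, the archive is
    credited with its best solution under  f(s) - w * ||d(s) - t||,
    and the result is averaged over all target-weight pairs.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Sample n_targets points uniformly in descriptor space.
    targets = rng.uniform(lo, hi, size=(n_targets, descriptors.shape[1]))
    # Distance from every solution to every sampled target: shape (S, T).
    dists = np.linalg.norm(descriptors[:, None, :] - targets[None, :, :], axis=2)
    per_pair_best = []
    for w in weights:
        # Best achievable penalised score for each target at this tolerance.
        per_pair_best.append(np.max(objectives[:, None] - w * dists, axis=0))
    return float(np.mean(per_pair_best))
```

Under this scoring rule, an archive whose solutions cover descriptor space densely incurs small distance penalties for most sampled targets, so average performance rewards the same quality/diversity trade-off the metric is meant to capture without binning the descriptor space.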