Diversity Policy Gradient for Sample Efficient Quality-Diversity Optimization

Published: 23 Apr 2022, Last Modified: 05 May 2023
Venue: ALOE@ICLR 2022
Keywords: reinforcement-learning, quality-diversity, open-endedness, continuous-control, evolutionary, optimization, map-elites
TL;DR: We present QD-PG, a quality-diversity algorithm that introduces a Diversity Policy Gradient to enhance diversity among discovered solutions.
Abstract: A fascinating aspect of nature lies in its ability to produce a large and diverse collection of high-performing organisms in an open-ended way. By contrast, most AI algorithms seek convergence and focus on finding a single efficient solution to a given problem. Aiming for diversity through divergent search in addition to performance is a convenient way to deal with the exploration-exploitation trade-off that plays a central role in learning. It also increases robustness when the returned collection contains several working solutions to the considered problem, making it well-suited for real applications such as robotics. Quality-Diversity (QD) methods are evolutionary algorithms designed for this purpose. This paper proposes a novel algorithm, QD-PG, which combines the strengths of Policy Gradient algorithms and Quality-Diversity approaches to produce a collection of diverse and high-performing neural policies in continuous control environments. The main contribution of this work is the introduction of a Diversity Policy Gradient (DPG) that drives policies towards more diversity in a sample-efficient and open-ended manner. Specifically, QD-PG selects neural controllers from a MAP-ELITES grid and uses two gradient-based mutation operators to improve both quality and diversity. Our results demonstrate that QD-PG is significantly more sample-efficient than its evolutionary competitors.
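
To make the loop described in the abstract concrete, here is a minimal Python sketch of a MAP-Elites-style archive updated by two mutation operators. All names (evaluate, quality_pg_mutation, diversity_pg_mutation) and the toy operators are illustrative assumptions, not the authors' implementation: the real QD-PG applies policy-gradient updates to neural controllers, whereas this sketch uses simple stand-ins on a parameter vector.

# Minimal toy sketch of a QD-PG-style loop (hypothetical names, simplified
# operators; not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

GRID_CELLS = 16   # number of behaviour-descriptor cells in the MAP-Elites grid
PARAM_DIM = 8     # toy stand-in for neural-controller parameters


def evaluate(params):
    """Return (fitness, behaviour descriptor in [0, 1]) for a toy controller."""
    fitness = -np.sum(params ** 2)                             # quality: higher is better
    descriptor = float(np.mean(np.tanh(params)) * 0.5 + 0.5)   # behaviour mapped to [0, 1]
    return fitness, descriptor


def quality_pg_mutation(params):
    """Stand-in for the quality policy-gradient operator: gradient ascent on fitness."""
    return params - 0.05 * 2.0 * params  # ascent step on -||params||^2


def diversity_pg_mutation(params):
    """Stand-in for the Diversity Policy Gradient operator: push towards new behaviours."""
    return params + 0.2 * rng.normal(size=params.shape)


archive = {}  # cell index -> (fitness, params); one elite per behaviour cell

# Seed the archive with random controllers, then alternate the two operators.
for _ in range(200):
    if archive and rng.random() < 0.8:
        _, parent = archive[rng.choice(list(archive))]
        mutate = quality_pg_mutation if rng.random() < 0.5 else diversity_pg_mutation
        child = mutate(parent.copy())
    else:
        child = rng.normal(size=PARAM_DIM)

    fitness, descriptor = evaluate(child)
    cell = min(int(descriptor * GRID_CELLS), GRID_CELLS - 1)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, child)  # keep the best controller found in each cell

print(f"filled {len(archive)}/{GRID_CELLS} cells")

The point of the sketch is the division of labour: the quality operator improves the fitness of a selected elite, while the diversity operator pushes it towards unoccupied behaviour cells, and the archive keeps the best controller per cell.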