Diversity Policy Gradient for Sample Efficient Quality-Diversity Optimization

21 May 2021 (modified: 05 May 2023) · NeurIPS 2021 Submitted · Readers: Everyone
Keywords: Deep Learning, Quality Diversity, Evolutionary Computing, Optimization, Exploration
TL;DR: This paper introduces a novel algorithm, QD-PG, that produces a collection of both high-performing and diverse solutions to a given problem. We show that QD-PG tackles hard exploration problems while improving the robustness of the solutions found.
Abstract: A fascinating aspect of nature lies in its ability to produce a large and diverse collection of organisms that are all high-performing in their niche. By contrast, most AI algorithms focus on finding a single efficient solution to a given problem. Aiming for diversity in addition to performance is a convenient way to deal with the exploration-exploitation trade-off that plays a central role in learning. It also allows for increased robustness when the returned collection contains several working solutions to the considered problem, making it well-suited for real applications such as robotics. Quality-Diversity (QD) methods are evolutionary algorithms designed for this purpose. This paper proposes a novel algorithm, QD-PG, which combines the strengths of Policy Gradient algorithms and Quality-Diversity approaches to produce a collection of diverse and high-performing neural policies in continuous control environments. The main contribution of this work is the introduction of a Diversity Policy Gradient (DPG) that exploits information at the time-step level to drive policies toward greater diversity in a sample-efficient manner. Specifically, QD-PG selects neural controllers from a MAP-Elites grid and uses two gradient-based mutation operators to improve both quality and diversity, resulting in stable population updates. Our results demonstrate that QD-PG produces collections of diverse solutions that solve challenging exploration and control problems while being two orders of magnitude more sample efficient than its evolutionary competitors.
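To make the loop the abstract describes concrete (selection from a MAP-Elites grid plus two gradient-based mutation operators), here is a minimal, self-contained Python sketch. The toy `evaluate`, `quality_update`, and `diversity_update` functions are hypothetical stand-ins, not the authors' implementation: the paper's actual operators are policy-gradient updates on neural controllers, and its diversity gradient uses time-step-level state information rather than parameter-space distances.

```python
# Minimal sketch of a QD-PG-style outer loop, under toy assumptions:
# policies are numpy parameter vectors, fitness and behavior descriptors
# come from a synthetic evaluate(), and the two mutation operators below
# are illustrative placeholders for the paper's policy-gradient updates.
import numpy as np

rng = np.random.default_rng(0)
GRID = 10  # grid resolution per behavior-descriptor dimension

def evaluate(theta):
    """Toy stand-in: return a fitness and a 2-D behavior descriptor in [0, 1]^2."""
    fitness = -float(np.sum(theta ** 2))
    descriptor = 1.0 / (1.0 + np.exp(-theta[:2]))  # squash first two params
    return fitness, descriptor

def cell(descriptor):
    """Discretize a descriptor into a MAP-Elites grid cell."""
    return tuple(np.minimum((descriptor * GRID).astype(int), GRID - 1))

def quality_update(theta, lr=0.1):
    """Hypothetical quality operator: gradient ascent on the toy fitness."""
    return theta + lr * (-2.0 * theta)

def diversity_update(theta, archive, lr=0.1):
    """Hypothetical diversity operator: push parameters away from the mean
    elite (the paper's DPG instead works at the time-step level)."""
    elites = np.stack([params for params, _ in archive.values()])
    direction = theta - elites.mean(axis=0)
    direction /= np.linalg.norm(direction) + 1e-8
    return theta + lr * direction + 0.01 * rng.standard_normal(theta.shape)

# Seed the archive with one random controller.
archive = {}  # MAP-Elites grid: cell -> (params, fitness)
theta0 = rng.standard_normal(4)
f0, d0 = evaluate(theta0)
archive[cell(d0)] = (theta0, f0)

for step in range(500):
    # Select a parent from the grid and mutate it with one of the two operators.
    parent, _ = list(archive.values())[rng.integers(len(archive))]
    child = quality_update(parent) if step % 2 == 0 else diversity_update(parent, archive)
    fit, desc = evaluate(child)
    c = cell(desc)
    # Standard MAP-Elites insertion: keep the best policy found in each cell.
    if c not in archive or fit > archive[c][1]:
        archive[c] = (child, fit)

print(f"{len(archive)} filled cells, best fitness "
      f"{max(f for _, f in archive.values()):.3f}")
```

The insertion rule is the standard MAP-Elites one: a child replaces the incumbent of its cell only if the cell is empty or the child's fitness is higher, so the grid accumulates the diverse, high-performing collection the abstract refers to.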
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: zip