Multi-objective evolution for Generalizable Policy Gradient Algorithms

Published: 27 Apr 2022, Last Modified: 22 Oct 2023, ICLR 2022 GPL Poster
Keywords: Multi-objective, evolution, Generalization, Policy Gradient, Stability
TL;DR: We present a method to evolve Reinforcement Learning algorithms that satisfy multiple RL objectives simultaneously (performance, generalizability, and stability), and find algorithms that outperform SAC on all three.
Abstract: Performance, generalizability, and stability are three Reinforcement Learning (RL) challenges relevant to many practical applications, in which they often arise in combination. Still, state-of-the-art RL algorithms fall short when addressing multiple RL objectives simultaneously, and current human-driven design practices might not be well-suited for multi-objective RL. In this paper we present MetaPG, an evolutionary method that discovers new RL algorithms represented as graphs, following a multi-objective search criterion in which different RL objectives are encoded as separate fitness scores. Our findings show that, when using a graph-based implementation of Soft Actor-Critic (SAC) to initialize the population, our method is able to find new algorithms that improve upon SAC's performance and generalizability by 3% and 17%, respectively, and reduce instability by up to 65%. In addition, we analyze the graph structure of the best algorithms in the population and offer an interpretation of specific elements that help trade performance for generalizability and vice versa. We validate our findings on three different continuous control tasks: RWRL Cartpole, RWRL Walker, and Gym Pendulum.
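
To make the multi-objective search idea concrete, below is a minimal sketch (not the paper's implementation) of Pareto-based evolutionary selection in which each candidate carries a separate fitness score per objective (performance, generalizability, stability). The `Candidate`, `evaluate`, and `mutate` names are hypothetical placeholders; in MetaPG the candidates are RL algorithm graphs and the scores come from training and evaluating them on the benchmark tasks.

```python
# Hedged sketch of multi-objective evolutionary selection with per-objective
# fitness scores. All names here are illustrative, not from the paper's code.
import random
from dataclasses import dataclass, field


@dataclass
class Candidate:
    """A candidate algorithm (the paper encodes these as computation graphs)."""
    genome: list = field(default_factory=lambda: [random.random() for _ in range(4)])
    # Fitness tuple: (performance, generalizability, stability); higher is better.
    fitness: tuple = (0.0, 0.0, 0.0)


def evaluate(cand: Candidate) -> tuple:
    # Placeholder scoring; in MetaPG these scores come from training and
    # evaluating the candidate RL algorithm (e.g., on RWRL Cartpole).
    g = cand.genome
    return (sum(g), -abs(g[0] - g[1]), -max(g))


def dominates(a: tuple, b: tuple) -> bool:
    """True if fitness a is at least as good as b everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(pop: list) -> list:
    return [c for c in pop
            if not any(dominates(o.fitness, c.fitness) for o in pop if o is not c)]


def mutate(cand: Candidate) -> Candidate:
    # Placeholder for graph mutation; here we just perturb a numeric genome.
    return Candidate(genome=[g + random.gauss(0, 0.1) for g in cand.genome])


def evolve(pop_size: int = 20, generations: int = 10) -> list:
    population = [Candidate() for _ in range(pop_size)]
    for cand in population:
        cand.fitness = evaluate(cand)
    for _ in range(generations):
        parents = pareto_front(population)
        children = [mutate(random.choice(parents)) for _ in range(pop_size)]
        for child in children:
            child.fitness = evaluate(child)
        # Keep non-dominated individuals from parents + children, pad randomly.
        merged = population + children
        survivors = pareto_front(merged)
        while len(survivors) < pop_size:
            survivors.append(random.choice(merged))
        population = survivors[:pop_size]
    return pareto_front(population)


if __name__ == "__main__":
    front = evolve()
    print(f"Pareto front size: {len(front)}")
```

The key design choice this sketch illustrates is that no single scalar reward is optimized; instead the population retains every non-dominated trade-off between the three objectives, which is what allows the search to surface algorithms that improve on SAC along different axes.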
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2204.04292/code)