Efficient Wasserstein Natural Gradients for Reinforcement Learning

28 Sept 2020, 15:50 (modified: 10 Feb 2022, 11:48) · ICLR 2021 Poster
Keywords: reinforcement learning, optimization
Abstract: A novel optimization approach is proposed for application to policy gradient methods and evolution strategies for reinforcement learning (RL). The procedure uses a computationally efficient \emph{Wasserstein natural gradient} (WNG) descent that takes advantage of the geometry induced by a Wasserstein penalty to speed optimization. This method follows the recent theme in RL of including divergence penalties in the objective to establish trust regions. Experiments on challenging tasks demonstrate improvements in both computational cost and performance over advanced baselines.
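The abstract describes penalizing the RL objective with a Wasserstein divergence between consecutive policies to establish a trust region. As a minimal sketch, assuming Gaussian policies with diagonal covariance (for which the squared 2-Wasserstein distance has the closed form ‖μ₁−μ₂‖² + ‖σ₁−σ₂‖²), the penalized objective might look as follows. The function names and penalty coefficient `lam` are illustrative, not taken from the paper:

```python
import numpy as np

def w2_sq_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between two diagonal Gaussians.

    For N(mu1, diag(sigma1^2)) and N(mu2, diag(sigma2^2)) this is
    ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2 (a standard closed form).
    """
    mu1, sigma1 = np.asarray(mu1, float), np.asarray(sigma1, float)
    mu2, sigma2 = np.asarray(mu2, float), np.asarray(sigma2, float)
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

def penalized_objective(surrogate, mu, sigma, mu_old, sigma_old, lam=0.1):
    """Trust-region-style objective: surrogate return minus a
    Wasserstein penalty that discourages large policy updates.
    (Illustrative sketch only; `lam` is a hypothetical coefficient.)
    """
    return surrogate - lam * w2_sq_diag_gaussians(mu, sigma, mu_old, sigma_old)
```

Taking the natural gradient of such an objective with respect to the Wasserstein geometry (rather than the Euclidean one) is what the paper's efficient WNG estimators target.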
One-sentence Summary: We develop novel, efficient estimators for the Wasserstein natural gradient applied to reinforcement learning that improve the efficiency and performance of advanced baselines.
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [![github](/images/github_icon.svg) tedmoskovitz/WNPG](https://github.com/tedmoskovitz/WNPG)