Trust-PCL: An Off-Policy Trust Region Method for Continuous Control

15 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Conference Blind Submission
Abstract: Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits the observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative-entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO.
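To make the pathwise-consistency idea in the abstract concrete, below is a minimal sketch of how such a consistency can be turned into an off-policy training signal. The function name, argument names, and the particular way an entropy weight `tau` and relative-entropy weight `lam` enter the regularized reward are illustrative assumptions, not the paper's reference implementation (see the Code link below for that).

```python
import numpy as np

def path_consistency_residual(rewards, log_pi, log_pi_prior,
                              v_start, v_end,
                              gamma=0.99, tau=0.0, lam=0.01):
    """Multi-step pathwise consistency residual for one length-d sub-path.

    rewards, log_pi, log_pi_prior: length-d arrays along the path
        (rewards r_i, log pi_theta(a_i | s_i), log pi_prior(a_i | s_i)).
    v_start, v_end: value estimates V(s_t) and V(s_{t+d}).
    Illustrative form only: the residual should vanish at the optimal policy
    and values of the entropy / relative-entropy regularized objective.
    """
    rewards = np.asarray(rewards, dtype=float)
    log_pi = np.asarray(log_pi, dtype=float)
    log_pi_prior = np.asarray(log_pi_prior, dtype=float)
    d = rewards.shape[0]
    discounts = gamma ** np.arange(d)
    # Regularized per-step reward: subtract the entropy penalty (tau) and the
    # relative-entropy penalty toward the prior/anchor policy (lam).
    regularized = rewards - (tau + lam) * log_pi + lam * log_pi_prior
    return -v_start + gamma ** d * v_end + np.sum(discounts * regularized)

# A training loss would square this residual and average it over sub-paths,
# e.g.: loss = np.mean([path_consistency_residual(*p) ** 2 for p in batch])
```

Because the consistency is stated to hold along any path, the squared residual can be minimized on sub-trajectories drawn from a replay buffer, which is what allows the method to exploit off-policy data.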
TL;DR: We extend recent insights related to softmax consistency to achieve state-of-the-art results in continuous control.
Keywords: Reinforcement learning
Code: [tensorflow/models](https://github.com/tensorflow/models)
Data: [OpenAI Gym](https://paperswithcode.com/dataset/openai-gym)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1707.01891/code)