Generalized Proximal Policy Optimization with Sample Reuse

21 May 2021 (edited 26 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: reinforcement learning, policy optimization
  • TL;DR: We develop policy improvement guarantees for the off-policy setting, which we use to motivate an off-policy version of Proximal Policy Optimization with principled sample reuse.
  • Abstract: In real-world decision-making tasks, it is critical for data-driven reinforcement learning methods to be both stable and sample efficient. On-policy methods typically generate reliable policy improvement throughout training, while off-policy methods make more efficient use of data through sample reuse. In this work, we combine the theoretically supported stability benefits of on-policy algorithms with the sample efficiency of off-policy algorithms. We develop policy improvement guarantees that are suitable for the off-policy setting, and connect these bounds to the clipping mechanism used in Proximal Policy Optimization. This motivates an off-policy version of the popular algorithm that we call Generalized Proximal Policy Optimization with Sample Reuse. We demonstrate both theoretically and empirically that our algorithm delivers improved performance by effectively balancing the competing goals of stability and sample efficiency.
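The abstract's core idea — generalizing PPO's clipping so that samples from older behavior policies can be reused — can be illustrated with a minimal sketch. This is an assumption-laden paraphrase, not the paper's exact objective: it clips the importance ratio around the current-to-behavior ratio (rather than around 1), so that on-policy data recovers the standard PPO clip. All names below (`geppo_clipped_objective`, `ratio`, `center`) are hypothetical.

```python
import numpy as np

def geppo_clipped_objective(ratio, center, adv, eps=0.2):
    """Illustrative clipped surrogate with off-policy sample reuse.

    ratio:  pi_theta(a|s) / pi_b(a|s), the policy being optimized over the
            behavior policy that generated each sample (possibly old).
    center: pi_k(a|s) / pi_b(a|s), the current policy over the behavior
            policy; equals 1 when samples are on-policy, in which case
            the clip interval [1 - eps, 1 + eps] matches standard PPO.
    adv:    advantage estimates for each (state, action) sample.
    """
    # Clip the importance ratio around the off-policy center instead of 1.
    clipped = np.clip(ratio, center - eps, center + eps)
    # Pessimistic minimum between unclipped and clipped surrogate terms.
    return np.minimum(ratio * adv, clipped * adv).mean()
```

With `center` set to 1 everywhere, this reduces to the familiar PPO clipped surrogate; the off-center clip is what allows samples from several prior policies to be reused with a principled trust region. The paper's released implementation is at the Code link above.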
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/jqueeney/geppo