The 37 Implementation Details of Proximal Policy Optimization

Anonymous

Published: 28 Mar 2022, Last Modified: 05 May 2023
Venue: BT@ICLR2022
Readers: Everyone
Keywords: proximal-policy-optimization, reproducibility, reinforcement-learning, implementation-details, code-level-optimizations, tutorial
Abstract: Proximal policy optimization (PPO) has become one of the most popular deep reinforcement learning (DRL) algorithms. Yet, reproducing PPO's results has been challenging in the community. While recent works have conducted ablation studies to provide insight into PPO's implementation details, these works are not structured as tutorials and focus only on details concerning robotics tasks. As a result, reproducing PPO from scratch can become a daunting experience. Instead of introducing additional improvements or conducting further ablation studies, this blog post takes a step back and focuses on delivering a thorough reproduction of PPO on all accounts, as well as aggregating, documenting, and cataloging its most salient implementation details. This blog post also points out software engineering challenges in PPO and further efficiency improvements via accelerated vectorized environments. With these, we believe this blog post will help people understand PPO faster and better, facilitating customization of and research built upon this versatile RL algorithm.
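For readers unfamiliar with the update the abstract refers to, below is a minimal sketch of PPO's clipped surrogate objective (Schulman et al., 2017). The function name, tensor shapes, and default clip coefficient are illustrative assumptions, not code taken from the blog post itself.

```python
import torch

def ppo_clip_loss(new_logprobs: torch.Tensor,
                  old_logprobs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_coef: float = 0.2) -> torch.Tensor:
    """Clipped surrogate policy loss from the PPO paper."""
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = (new_logprobs - old_logprobs).exp()
    unclipped = ratio * advantages
    # Clipping the ratio keeps the policy update close to the old policy.
    clipped = torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef) * advantages
    # Take the pessimistic (minimum) objective; negate to minimize.
    return -torch.min(unclipped, clipped).mean()

# Usage with dummy batch data:
adv = torch.randn(64)
new_lp, old_lp = torch.randn(64), torch.randn(64)
loss = ppo_clip_loss(new_lp, old_lp, adv)
```

The pessimistic minimum is what makes the objective a lower bound on the unclipped surrogate, which is the core idea the blog post's 37 implementation details build around.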
ICLR Paper: https://openreview.net/forum?id=r1etN1rtPB, https://openreview.net/forum?id=nIAxjsniDzg