EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine

03 Jun 2022, 06:15 (modified: 12 Oct 2022, 16:10) · NeurIPS 2022 Datasets and Benchmarks · Readers: Everyone
Keywords: reinforcement learning, speed up
TL;DR: A Highly Parallel Reinforcement Learning Environment Execution Engine
Abstract: There has been significant progress in developing reinforcement learning (RL) training systems. Past works such as IMPALA, Apex, Seed RL, Sample Factory, and others aim to improve the system's overall throughput. In this paper, we aim to address a common bottleneck in RL training systems, i.e., parallel environment execution, which is often the slowest part of the whole system but receives little attention. With a curated design for parallelizing RL environments, we have improved the RL environment simulation speed across different hardware setups, ranging from a laptop and a modest workstation to a high-end machine such as the NVIDIA DGX-A100. On a high-end machine, EnvPool achieves one million frames per second of environment execution on Atari environments and three million frames per second on MuJoCo environments. When running on a laptop, EnvPool is 2.8x faster than the Python subprocess baseline. Moreover, great compatibility with existing RL training libraries has been demonstrated in the open-source community, including CleanRL, rl_games, DeepMind Acme, etc. Finally, EnvPool allows researchers to iterate on their ideas at a much faster pace and has great potential to become the de facto RL environment execution engine. Example runs show that it takes only five minutes to train agents to play Atari Pong and MuJoCo Ant on a laptop. EnvPool is open-sourced at https://github.com/sail-sg/envpool.
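The abstract's core design point is a batched `step()` API: instead of one Python subprocess per environment, a single call steps all environments through a shared worker pool. Below is a minimal pure-Python sketch of that API shape; `CounterEnv` and `BatchedEnvPool` are hypothetical illustrations (EnvPool's actual engine is implemented in C++ with an asynchronous thread pool, which is where the speedup comes from), not the library's real interface.

```python
from concurrent.futures import ThreadPoolExecutor

class CounterEnv:
    """Toy environment (hypothetical): the observation is a step counter,
    and an episode ends after `horizon` steps."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        done = self.t >= self.horizon
        obs, reward = self.t, float(action)
        if done:
            obs = self.reset()  # auto-reset on episode end
        return obs, reward, done

class BatchedEnvPool:
    """Sketch of a batched, thread-pooled step() call: one call advances
    every environment and returns stacked results. Python threads only
    illustrate the API; they do not give a real speedup for CPU-bound envs."""
    def __init__(self, num_envs, num_threads=4):
        self.envs = [CounterEnv() for _ in range(num_envs)]
        self.pool = ThreadPoolExecutor(max_workers=num_threads)

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        # Dispatch all env.step() calls to the worker pool in one batch.
        results = list(self.pool.map(lambda pair: pair[0].step(pair[1]),
                                     zip(self.envs, actions)))
        obs, rews, dones = map(list, zip(*results))
        return obs, rews, dones

envpool_like = BatchedEnvPool(num_envs=8)
obs = envpool_like.reset()            # [0, 0, 0, 0, 0, 0, 0, 0]
obs, rews, dones = envpool_like.step([1] * 8)
```

The same batched shape is what lets a training loop feed a whole batch of observations to the policy network at once, instead of ping-ponging between Python processes one environment at a time.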
Supplementary Material: pdf
URL: https://github.com/sail-sg/envpool
License: Apache 2.0
Author Statement: Yes
Contribution Process Agreement: Yes
In Person Attendance: Yes