Abstract: We introduce Proximal Policy Distillation (PPD), a novel policy distillation method that integrates student-driven distillation and Proximal Policy Optimization (PPO) to increase sample efficiency and to leverage the additional rewards that the student policy collects during distillation. To assess the efficacy of our method, we compare PPD with two common alternatives, student-distill and teacher-distill, over a wide range of reinforcement learning environments that include discrete actions and continuous control (Atari, MuJoCo, and Procgen). For each environment and method, we perform distillation to a set of target student neural networks that are smaller than, identical to (self-distillation), or larger than the teacher network. Our findings indicate that PPD improves sample efficiency and produces better student policies compared to typical policy distillation approaches. Moreover, PPD demonstrates greater robustness than alternative methods when distilling policies from imperfect demonstrations. The code for the paper is released as part of a new Python library built on top of stable-baselines3 to facilitate policy distillation: <Anonymized GitHub Repository>.
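To make the idea concrete, below is a minimal, hypothetical sketch of a PPD-style objective. It assumes (this is our reading, not the paper's verbatim formulation) that the student collects its own rollouts and is trained with the standard PPO clipped surrogate loss augmented by a KL distillation term toward the teacher's action distribution; the function name, `distill_coef` weight, and tensor shapes are illustrative choices, not the authors' API.

```python
import torch
import torch.nn.functional as F

def ppd_style_loss(student_logits, teacher_logits, old_logits, actions,
                   advantages, clip_eps=0.2, distill_coef=1.0):
    """Sketch of a PPO clipped surrogate plus a teacher-distillation KL term.

    student_logits / teacher_logits / old_logits: (batch, n_actions)
    actions: (batch,) int64 actions taken by the student in its own rollouts
    advantages: (batch,) advantage estimates from the student's rewards
    """
    # Log-prob of the taken action under the current and rollout-time student.
    logp = torch.log_softmax(student_logits, dim=-1) \
                .gather(1, actions.unsqueeze(1)).squeeze(1)
    old_logp = torch.log_softmax(old_logits, dim=-1) \
                    .gather(1, actions.unsqueeze(1)).squeeze(1)

    # PPO clipped surrogate objective (to be maximized).
    ratio = torch.exp(logp - old_logp)
    surrogate = torch.min(
        ratio * advantages,
        torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages,
    )

    # Distillation term: KL from teacher to student over action distributions.
    kl = F.kl_div(
        torch.log_softmax(student_logits, dim=-1),
        torch.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )

    # Minimize: negative surrogate (policy improvement) + weighted distillation.
    return -surrogate.mean() + distill_coef * kl
```

Because the student generates the rollouts itself (student-driven distillation), the surrogate term exploits any extra reward the student finds, which is the property the abstract credits for PPD's robustness to imperfect teachers.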
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: The main modifications are:
- Integration of new literature and its comparison to PPD;
- Inclusion of PPO baseline;
- Improved statistical analysis of the results (most tables and figures);
- Minor additional improvements throughout the text.
The main changes are highlighted in red.
Assigned Action Editor: ~Dennis_J._N._J._Soemers1
Submission Number: 3042