Proximal Policy Distillation

Published: 07 Jun 2025 · Last Modified: 07 Jun 2025 · Accepted by TMLR · License: CC BY 4.0
Abstract: We introduce Proximal Policy Distillation (PPD), a novel policy distillation method that integrates student-driven distillation and Proximal Policy Optimization (PPO) to increase sample efficiency and to leverage the additional rewards that the student policy collects during distillation. To assess the efficacy of our method, we compare PPD with two common alternatives, student-distill and teacher-distill, over a wide range of reinforcement learning environments that include discrete actions and continuous control (Atari, MuJoCo, and Procgen). For each environment and method, we perform distillation to a set of target student neural networks that are smaller than, identical to (self-distillation), or larger than the teacher network. Our findings indicate that PPD improves sample efficiency and produces better student policies compared to typical policy distillation approaches. Moreover, PPD demonstrates greater robustness than alternative methods when distilling policies from imperfect demonstrations. The code for the paper is released as part of a new Python library built on top of stable-baselines3 to facilitate policy distillation: <Anonymized GitHub Repository>.
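The abstract describes PPD as combining student-driven distillation with PPO so that rewards collected by the student also shape its policy. A minimal sketch of one plausible per-sample objective is below; the function name `ppd_loss`, the specific combination (PPO's clipped surrogate minus a KL distillation penalty toward the teacher), and the coefficient `kl_coef` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ppd_loss(student_logits, teacher_logits, old_log_prob, action,
             advantage, kl_coef=1.0, clip_eps=0.2):
    """Hypothetical per-sample PPD-style loss (illustrative sketch only):
    PPO's clipped surrogate on the student's own collected rewards,
    plus a KL(teacher || student) penalty that distills the teacher."""
    probs = softmax(student_logits)
    log_prob = math.log(probs[action])
    # PPO importance ratio between current and behavior policy
    ratio = math.exp(log_prob - old_log_prob)
    clipped = max(min(ratio, 1 + clip_eps), 1 - clip_eps)
    surrogate = min(ratio * advantage, clipped * advantage)
    # Distillation term: KL divergence from teacher to student
    t_probs = softmax(teacher_logits)
    kl = sum(t * (math.log(t) - math.log(s))
             for t, s in zip(t_probs, probs))
    # Negate because optimizers minimize; surrogate is maximized,
    # the KL penalty is minimized
    return -(surrogate - kl_coef * kl)
```

When teacher and student agree and the behavior policy matches the current one, the KL term vanishes and the loss reduces to the plain (negated) PPO surrogate, so the student keeps optimizing environment reward while being pulled toward the teacher elsewhere.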
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version, de-anonymized, highlighted changes removed. Minor change: number of training env steps added to the hyperparameters table in the Appendix (Table 3).
Assigned Action Editor: ~Dennis_J._N._J._Soemers1
Submission Number: 3042