Regularization Matters in Policy Optimization - An Empirical Study on Continuous Control

Published: 12 Jan 2021, Last Modified: 05 May 2023
ICLR 2021 Spotlight
Readers: Everyone
Keywords: Policy Optimization, Regularization, Continuous Control, Deep Reinforcement Learning
Abstract: Deep Reinforcement Learning (Deep RL) has received increasing attention thanks to its encouraging performance on a variety of control tasks. Yet, conventional regularization techniques for training neural networks (e.g., $L_2$ regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment, and because the deep RL community focuses more on high-level algorithm design. In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks. Interestingly, we find that applying conventional regularization techniques to the policy network can often bring large improvements, especially on harder tasks. Our findings are robust to variations in training hyperparameters. We also compare these techniques with the more widely used entropy regularization. In addition, we study regularizing different components and find that regularizing only the policy network typically works best. We further analyze why regularization may help generalization in RL from four perspectives: sample complexity, reward distribution, weight norm, and noise robustness. We hope our study provides guidance for future practices in regularizing policy optimization algorithms. Our code is available at https://github.com/xuanlinli17/iclr2021_rlreg.
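As a minimal illustration of the setup the abstract describes, the sketch below adds an $L_2$ penalty on the policy network's parameters to a vanilla policy-gradient loss in PyTorch. This is not the authors' implementation (see the linked repository for that); the class, function, and coefficient names (`PolicyNet`, `update`, `L2_COEF`) and the specific values are illustrative assumptions.

```python
# Minimal sketch, assuming a Gaussian policy for continuous control and a
# vanilla policy-gradient objective. Names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small Gaussian policy network."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        h = self.body(obs)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

policy = PolicyNet(obs_dim=17, act_dim=6)  # e.g., HalfCheetah-like sizes
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
L2_COEF = 1e-4  # illustrative strength; in practice this is tuned per task

def update(obs, actions, advantages):
    log_probs = policy.dist(obs).log_prob(actions).sum(-1)
    pg_loss = -(log_probs * advantages).mean()  # vanilla policy gradient
    # L2 penalty on the policy network only (not the value network),
    # reflecting the finding that regularizing the policy alone works best.
    l2 = sum(p.pow(2).sum() for p in policy.parameters())
    loss = pg_loss + L2_COEF * l2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Equivalently, a plain $L_2$ penalty can be applied via the optimizer's `weight_decay` argument; the explicit form above just makes clear that the penalty touches the policy network alone.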
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We show that conventional regularization methods (e.g., $L_2$), which have been largely ignored in RL methods, can be very effective in policy optimization on continuous control tasks; we also analyze why they can help from several perspectives.
Supplementary Material: zip
Code: [anonymouscode114/iclr2021_rlreg](https://github.com/anonymouscode114/iclr2021_rlreg)
Data: [MuJoCo](https://paperswithcode.com/dataset/mujoco)