Keywords: Reinforcement learning, Actor-critic, Deep RL
TL;DR: We conduct a broad empirical analysis of asymmetric actor-critic methods and find mechanisms whereby the critic can help mitigate performance degradation from smaller actors.
Abstract: Actor-critic methods have been central to many of the recent advances in deep reinforcement learning. The most common approach is to use _symmetric_ architectures, whereby both actor and critic have the same network topology and number of parameters. However, recent works have argued for the advantages of _asymmetric_ setups, specifically with the use of smaller actors. We perform broad empirical investigations and analyses to better understand the implications of this asymmetry and find that, in general, smaller actors lead to degraded performance and overfit critics. Our analyses suggest _poor data collection_, caused by value underestimation, as one of the main sources of this behavior, and further highlight the crucial role the critic can play in alleviating this pathology. We explore techniques to mitigate the observed value underestimation, paving the way for further research in asymmetric actor-critic methods.
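For concreteness, here is a minimal sketch (not from the paper) of the symmetric versus asymmetric parameterizations the abstract refers to: in the symmetric case actor and critic share the same topology, while in the asymmetric case the actor is much smaller than the critic. The MLP structure, layer widths, and dimensions below are illustrative assumptions only.

```python
# Illustrative sketch of symmetric vs. asymmetric actor-critic architectures.
# All widths/dimensions are assumptions chosen for demonstration.
import torch.nn as nn


def mlp(in_dim, hidden_dims, out_dim):
    """Build a plain MLP with ReLU activations."""
    layers, prev = [], in_dim
    for h in hidden_dims:
        layers += [nn.Linear(prev, h), nn.ReLU()]
        prev = h
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)


obs_dim, act_dim = 17, 6  # e.g. a continuous-control task

# Symmetric: actor and critic have the same topology and parameter count.
sym_actor = mlp(obs_dim, [256, 256], act_dim)
sym_critic = mlp(obs_dim + act_dim, [256, 256], 1)  # Q(s, a)

# Asymmetric: a much smaller actor paired with the same-size critic.
asym_actor = mlp(obs_dim, [16, 16], act_dim)
asym_critic = mlp(obs_dim + act_dim, [256, 256], 1)

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(f"symmetric actor params:  {n_params(sym_actor):,}")
print(f"asymmetric actor params: {n_params(asym_actor):,}")
```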
Submission Number: 283