Representation Gap in Deep Reinforcement Learning

28 May 2022 (modified: 22 Oct 2023) · DARL 2022
Keywords: deep reinforcement learning, representation learning, policy evaluation
TL;DR: We propose a deep reinforcement learning framework, POPRO, that activates the representation gap and thereby increases representation capacity; experiments show that it outperforms or matches state-based RL algorithms when given pixel inputs.
Abstract: Deep reinforcement learning promises that an agent can learn a good policy from high-dimensional observations, while representation learning removes irrelevant and redundant information and retains what is pertinent. We study the representation capacity of the action-value function and theoretically reveal an inherent property: a representation gap between it and its target action-value function. This representation gap is favorable. However, through illustrative experiments, we show that in practice the representation of the action-value function evolves to closely resemble that of its target, i.e., the representation gap becomes undesirably inactive (representation overlap). Representation overlap results in a loss of representation capacity, which in turn leads to sub-optimal learning performance. To activate the representation gap, we propose a simple but effective framework, Policy Optimization from Preventing Representation Overlaps (POPRO), which regularizes the policy evaluation phase by pushing the representation of the action-value function away from that of its target. We also provide a convergence-rate guarantee for POPRO. We evaluate POPRO on Gym continuous-control suites. The empirical results show that POPRO with pixel inputs outperforms or matches the sample efficiency of methods that use state-based features.
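The abstract describes POPRO as adding a regularizer during policy evaluation that keeps the online action-value representation distinct from the target network's representation. The following is a minimal sketch of that idea, not the paper's actual implementation: the network names, the batch layout, the use of cosine similarity as the overlap measure, and the coefficient `alpha` are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    """Action-value network exposing its penultimate representation."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, obs, act):
        phi = self.encoder(torch.cat([obs, act], dim=-1))  # representation
        return self.head(phi), phi


def critic_loss(q_net, q_target, batch, gamma=0.99, alpha=0.1):
    """TD loss plus a hypothetical representation-overlap penalty.

    The penalty term discourages the online representation from matching
    the (frozen) target representation; `alpha` is an assumed weight, and
    cosine similarity is an assumed choice of overlap measure.
    """
    obs, act, rew, next_obs, next_act, done = batch
    with torch.no_grad():
        # Standard bootstrapped TD target from the target network.
        next_q, _ = q_target(next_obs, next_act)
        y = rew + gamma * (1.0 - done) * next_q
        # Target representation of the current (obs, act) pair.
        _, target_phi = q_target(obs, act)

    q, phi = q_net(obs, act)
    td_loss = F.mse_loss(q, y)

    # Higher similarity (more overlap) -> larger loss, so minimizing this
    # term pushes the two representations apart.
    overlap = F.cosine_similarity(phi, target_phi, dim=-1).mean()
    return td_loss + alpha * overlap
```

In an off-policy actor-critic loop, this loss would simply replace the usual critic loss, with everything else (target-network updates, policy improvement) left unchanged.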
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2205.14557/code)