Learning Representations in Reinforcement Learning: an Information Bottleneck Approach

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: representation learning, reinforcement learning, information bottleneck
TL;DR: We derive an information bottleneck framework for reinforcement learning, together with simple supporting theory and tools.
Abstract: The information bottleneck principle is an elegant and useful approach to representation learning. In this paper, we investigate representation learning in the context of reinforcement learning using the information bottleneck framework, aiming to improve the sample efficiency of the learning algorithms. We analytically derive the optimal conditional distribution of the representation and provide a variational lower bound. We then maximize this lower bound with the Stein variational (SV) gradient method. We incorporate this framework into the advantage actor-critic algorithm (A2C) and the proximal policy optimization algorithm (PPO). Our experimental results show that our framework significantly improves the sample efficiency of vanilla A2C and PPO. Finally, we study the information-bottleneck (IB) perspective in deep RL using the mutual information neural estimation (MINE) algorithm. We experimentally verify that the information extraction-compression process also exists in deep RL and that our framework can accelerate this process. We also analyze the relationship between MINE and our method; through this relationship, we theoretically derive an algorithm that optimizes our IB framework without constructing the lower bound.
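To make the abstract's core optimization step concrete, here is a minimal sketch of a single Stein variational gradient (SVGD) update of the kind used to maximize the variational lower bound. The `log_prob` target, the RBF kernel, and the median-heuristic bandwidth are illustrative assumptions, not the authors' SVIB implementation (see the linked code below for that):

```python
import torch

def svgd_step(z, log_prob, step_size=1e-2):
    """Move particles z (n, d) along the SVGD direction for `log_prob`."""
    z = z.detach().requires_grad_(True)
    grad_logp = torch.autograd.grad(log_prob(z).sum(), z)[0]  # (n, d)

    # RBF kernel with a median-heuristic bandwidth.
    n = z.size(0)
    sq = torch.cdist(z, z).pow(2)                             # (n, n)
    h = sq.median() / torch.log(torch.tensor(n + 1.0)) + 1e-8
    k = torch.exp(-sq / h)

    # Closed-form kernel gradient: sum_j grad_{z_j} k(z_j, z_i).
    grad_k = (k.sum(1, keepdim=True) * z - k @ z) * (2.0 / h)

    # SVGD direction: kernel-smoothed log-density gradient plus repulsion.
    phi = (k @ grad_logp + grad_k) / n
    return (z + step_size * phi).detach()

# Toy usage: particles drift toward a standard normal target.
particles = torch.randn(64, 2) * 3.0
for _ in range(200):
    particles = svgd_step(particles, lambda x: -0.5 * (x ** 2).sum(-1))
```

The repulsive `grad_k` term is what distinguishes SVGD from plain gradient ascent: it keeps the particle set spread out so it approximates the target distribution rather than collapsing to its mode.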
Code: https://github.com/AnonymousSubmittedCode/SVIB
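The abstract's MINE component can likewise be summarized with a generic Donsker-Varadhan estimator of I(S; Z) between states and learned representations. This is a hypothetical sketch with illustrative network names and sizes, not the authors' code; the bias-corrected gradient estimator from the MINE paper is omitted for brevity:

```python
import math
import torch
import torch.nn as nn

class Mine(nn.Module):
    """Statistics network T(s, z) for the Donsker-Varadhan bound
    I(S; Z) >= E_joint[T] - log E_marginal[exp(T)]."""

    def __init__(self, state_dim, repr_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + repr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def mi_lower_bound(self, states, reprs):
        # Joint samples (s_i, z_i) keep the original pairing.
        t_joint = self.net(torch.cat([states, reprs], dim=-1)).squeeze(-1)
        # Marginal samples break the pairing by shuffling z.
        shuffled = reprs[torch.randperm(reprs.size(0))]
        t_marg = self.net(torch.cat([states, shuffled], dim=-1)).squeeze(-1)
        return (t_joint.mean()
                - torch.logsumexp(t_marg, dim=0) + math.log(len(t_marg)))

# Toy usage: estimate mutual information between correlated Gaussians.
mine = Mine(state_dim=4, repr_dim=4)
opt = torch.optim.Adam(mine.parameters(), lr=1e-3)
s = torch.randn(256, 4)
z = s + 0.1 * torch.randn(256, 4)
for _ in range(100):
    loss = -mine.mi_lower_bound(s, z)   # maximize the bound
    opt.zero_grad(); loss.backward(); opt.step()
```

Training the statistics network to maximize this bound yields the mutual-information estimates that the paper uses to track the information extraction-compression process in deep RL.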