Which Mutual-Information Representation Learning Objectives are Sufficient for Control?

May 21, 2021 (edited Jan 24, 2022) · NeurIPS 2021 Poster
  • Keywords: reinforcement learning, representation learning, mutual information
  • TL;DR: We theoretically analyze whether popular MI-based representation learning objectives for RL yield state representations sufficient for learning and representing optimal control policies, and illustrate our findings with deep RL experiments.
  • Abstract: Mutual information (MI) maximization provides an appealing formalism for learning representations of data. In the context of reinforcement learning (RL), such representations can accelerate learning by discarding irrelevant and redundant information, while retaining the information necessary for control. Much prior work on these methods has addressed the practical difficulties of estimating MI from samples of high-dimensional observations, while comparatively less is understood about which MI objectives yield representations that are sufficient for RL from a theoretical perspective. In this paper, we formalize the sufficiency of a state representation for learning and representing the optimal policy, and study several popular MI-based objectives through this lens. Surprisingly, we find that two of these objectives can yield insufficient representations given mild and common assumptions on the structure of the MDP. We corroborate our theoretical results with empirical experiments on a simulated game environment with visual observations.
  • Supplementary Material: pdf
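The abstract's central object is a mutual-information maximization objective, which in practice is optimized through a variational lower bound estimated from batches of samples. As a purely illustrative sketch (not the paper's own method), the following is a minimal NumPy implementation of the widely used InfoNCE (contrastive) lower bound on I(X; Z), where `scores[i, j]` is assumed to be the output of some learned critic f(x_i, z_j) on a batch of N paired samples:

```python
import numpy as np

def infonce_lower_bound(scores):
    """InfoNCE lower bound on mutual information I(X; Z).

    scores: (N, N) array where scores[i, j] is a critic value f(x_i, z_j)
    for a batch of N samples; the diagonal holds the positive (jointly
    drawn) pairs, off-diagonal entries serve as negatives.

    Returns a scalar lower bound on I(X; Z); the bound saturates at log N.
    """
    n = scores.shape[0]
    # Numerically stable row-wise log-softmax: each positive pair is
    # contrasted against the N - 1 negatives in its row.
    row_max = scores.max(axis=1, keepdims=True)
    log_norm = row_max + np.log(np.exp(scores - row_max).sum(axis=1, keepdims=True))
    log_probs = scores - log_norm
    # I(X; Z) >= log N + E[log softmax of the positive pair]
    return np.log(n) + np.mean(np.diag(log_probs))
```

With an uninformative critic (all scores equal) the bound evaluates to 0, and as the critic separates positives from negatives it approaches its log N ceiling — one reason such objectives can be hard to compare when the true MI is large.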