Harnessing Structures for Value-Based Planning and Reinforcement Learning

Sep 25, 2019 · Blind Submission
  • Keywords: Deep reinforcement learning, value-based reinforcement learning
  • TL;DR: We propose a generic framework that allows for exploiting the low-rank structure in both planning and deep reinforcement learning.
  • Abstract: Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures. Specifically, we investigate the low-rank structure, which widely exists for big data matrices. We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks (Atari games). As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions, leading to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to any value-based RL techniques to consistently achieve better performance on "low-rank" tasks. Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.
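
The idea the abstract sketches, evaluating Q(s, a) on only a subset of state-action pairs and reconstructing the remaining entries under a low-rank assumption via matrix estimation, can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm; the function name `complete_low_rank`, the rank hyperparameter, and the toy Q-matrix below are all assumed for illustration only.

```python
import numpy as np

def complete_low_rank(q_obs, mask, rank=3, n_iters=200):
    """Fill in missing Q(s, a) entries by iterative truncated-SVD projection.

    q_obs : Q-matrix with arbitrary placeholder values where mask is False
    mask  : boolean matrix, True where Q(s, a) was actually evaluated
    rank  : assumed rank of the underlying Q-matrix (a hyperparameter)
    """
    q_hat = np.where(mask, q_obs, 0.0)  # initialize unobserved entries to 0
    for _ in range(n_iters):
        # Project the current estimate onto the set of rank-`rank` matrices.
        u, s, vt = np.linalg.svd(q_hat, full_matrices=False)
        q_low = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
        # Keep observed entries fixed; update only the missing ones.
        q_hat = np.where(mask, q_obs, q_low)
    return q_hat

# Toy usage: a 50-state x 10-action Q-matrix that is exactly rank 2,
# of which only ~40% of entries are "evaluated".
rng = np.random.default_rng(0)
true_q = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 10))
mask = rng.random(true_q.shape) < 0.4
recovered = complete_low_rank(true_q, mask, rank=2)
print(np.max(np.abs(recovered - true_q)))  # should be small if recovery succeeds
```

In a planning or value-iteration loop, such a completion step would replace exhaustive evaluation of every (s, a) pair with evaluation of a sampled subset followed by reconstruction; the design choice here (hard rank truncation with observed entries clamped) is just one simple matrix-estimation heuristic among many.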