Learning One Representation to Optimize All Rewards

Mar 09, 2021 (edited Apr 26, 2021) · ICLR 2021 Workshop SSL-RL Blind Submission
  • Keywords: reward-free MDP, successor states, self-supervision
  • TL;DR: We introduce a learnable "summary" of a reward-free MDP, from which near-optimal policies can be obtained for any reward function specified a posteriori, instantaneously without planning.
  • Abstract: We introduce the forward-backward (FB) representation of the dynamics of a reward-free Markov decision process. It provides explicit near-optimal policies for any reward specified a posteriori. During an unsupervised phase, we use reward-free interactions with the environment to learn two representations via off-the-shelf deep learning methods and temporal difference (TD) learning. In the test phase, a reward representation is estimated either from observations or an explicit reward description (e.g., a target state). The optimal policy for that reward is directly obtained from these representations, with no planning. The unsupervised FB loss is well-principled: if training is perfect, the policies obtained are provably optimal for any reward function. With imperfect training, the sub-optimality is proportional to the unsupervised approximation error. The FB representation learns long-range relationships between states and actions, via a predictive occupancy map, without having to synthesize states as in model-based approaches. This is a step towards learning controllable agents in arbitrary black-box stochastic environments. This approach compares well to goal-oriented RL algorithms on discrete and continuous mazes, pixel-based Ms. Pacman, and the FetchReach virtual robot arm. We also illustrate how the agent can immediately adapt to new tasks beyond goal-oriented RL.
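For a concrete picture of the test phase described in the abstract, here is a minimal sketch. It assumes the FB parametrization in which the predictive occupancy map is approximated as F(s, a, z)ᵀ B(s'), the reward representation is estimated as z = E[r(s) B(s)] over observed states, and the policy acts greedily on F(s, a, z)ᵀ z. The stand-in F and B functions, the representation dimension, and the discrete action space are illustrative placeholders for the networks learned during the unsupervised phase, not the paper's actual models.

```python
import numpy as np

# Illustrative sketch of the FB test phase (not the authors' implementation).
# F and B below are random stand-ins for the representations that would be
# learned via TD during the reward-free, unsupervised phase.

d = 32          # representation dimension (illustrative)
n_actions = 4   # illustrative discrete action space

def F(state, action, z):
    """Forward representation F(s, a, z) -> R^d (stand-in for a trained network)."""
    rng = np.random.default_rng(hash((state, action)) % (2**32))
    return rng.standard_normal(d)

def B(state):
    """Backward representation B(s') -> R^d (stand-in for a trained network)."""
    rng = np.random.default_rng(hash(state) % (2**32))
    return rng.standard_normal(d)

def reward_representation(states, rewards):
    """Estimate z = E[r(s) B(s)] from reward-labelled observations."""
    return np.mean([r * B(s) for s, r in zip(states, rewards)], axis=0)

def policy(state, z):
    """Greedy policy argmax_a F(s, a, z)^T z -- obtained directly, with no planning."""
    q_values = [F(state, a, z) @ z for a in range(n_actions)]
    return int(np.argmax(q_values))

# Test phase: a reward specified a posteriori (here, reward 1 at a target state)
# is turned into a representation z, and the agent can act immediately.
states  = [0, 1, 2, 3]
rewards = [0.0, 0.0, 1.0, 0.0]
z = reward_representation(states, rewards)
action = policy(state=0, z=z)
```

A goal-oriented task (e.g., "reach the target state") corresponds in this sketch to a reward that is nonzero only at the target, but any reward function specified at test time can be handled the same way, which is how the approach goes beyond goal-oriented RL.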