Loss is its own Reward: Self-Supervision for Reinforcement Learning

ICLR 2017 workshop submission (last modified: 19 Feb 2017)
Abstract: Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a noisy and impoverished signal for end-to-end optimization. To augment reward, we consider self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. Self-supervised pre-training improves the data efficiency and returns of end-to-end reinforcement learning on Atari.
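The auxiliary losses described in the abstract are built from (state, action, successor) transitions and require no reward. Below is a minimal sketch of that idea, assuming a discrete-action setting: a shared encoder is trained with illustrative inverse-dynamics and forward-dynamics heads, then its features can be reused by a policy. The module names, network sizes, and loss weighting are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Shared state encoder; its features feed both the policy and the auxiliary heads."""
    def __init__(self, obs_dim, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, obs):
        return self.net(obs)

class SelfSupervisedAux(nn.Module):
    """Auxiliary heads over (state, action, successor) transitions.

    - inverse dynamics: predict the action from phi(s) and phi(s')
    - forward dynamics: predict phi(s') from phi(s) and the action
    Neither loss needs reward, so supervision is available on every transition.
    (Hypothetical heads chosen for illustration.)
    """
    def __init__(self, feat_dim, num_actions):
        super().__init__()
        self.num_actions = num_actions
        self.inverse_head = nn.Linear(2 * feat_dim, num_actions)
        self.forward_head = nn.Linear(feat_dim + num_actions, feat_dim)

    def forward(self, phi_s, phi_next, actions):
        # Inverse-dynamics loss: classify which action caused s -> s'.
        action_logits = self.inverse_head(torch.cat([phi_s, phi_next], dim=-1))
        inverse_loss = F.cross_entropy(action_logits, actions)

        # Forward-dynamics loss: regress successor features from (phi(s), a).
        a_onehot = F.one_hot(actions, self.num_actions).float()
        pred_next = self.forward_head(torch.cat([phi_s, a_onehot], dim=-1))
        forward_loss = F.mse_loss(pred_next, phi_next.detach())

        return inverse_loss + forward_loss

# Illustrative reward-free pre-training step on transitions (s, a, s').
obs_dim, num_actions, feat_dim = 32, 6, 128
encoder = Encoder(obs_dim, feat_dim)
aux = SelfSupervisedAux(feat_dim, num_actions)
optim = torch.optim.Adam(list(encoder.parameters()) + list(aux.parameters()), lr=1e-3)

s = torch.randn(64, obs_dim)               # batch of states
a = torch.randint(0, num_actions, (64,))   # actions taken
s_next = torch.randn(64, obs_dim)          # successor states

loss = aux(encoder(s), encoder(s_next), a)
optim.zero_grad()
loss.backward()
optim.step()
```

After such pre-training, the encoder's weights would initialize the policy network for end-to-end reinforcement learning, which is the setting in which the abstract reports improved data efficiency and returns on Atari.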
TL;DR: Pre-training with auxiliary losses improves the data efficiency of policy optimization on Atari.
Keywords: Deep learning, Unsupervised Learning, Reinforcement Learning
Conflicts: cs.berkeley.edu, openai.com