Loss is its own Reward: Self-Supervision for Reinforcement Learning
Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell
Feb 17, 2017 (modified: Feb 19, 2017) · ICLR 2017 workshop submission
Abstract: Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a noisy and impoverished signal for end-to-end optimization. To augment reward, we consider self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. Self-supervised pre-training improves the data efficiency and returns of end-to-end reinforcement learning on Atari.
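The self-supervised tasks the abstract describes combine states, actions, and successors into a loss that needs no reward, so every transition yields a training signal. One common instance is a forward-dynamics objective: predict the successor state from the current state and action. The toy sketch below illustrates the idea with a linear model on synthetic transitions; the dimensions, variable names, and linear form are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition batch: states s, one-hot actions a, successors s_next.
n, state_dim, n_actions = 64, 8, 4
s = rng.normal(size=(n, state_dim))
a = np.eye(n_actions)[rng.integers(0, n_actions, size=n)]

# Toy environment dynamics: successor depends linearly on (state, action), plus noise.
W_true = rng.normal(size=(state_dim + n_actions, state_dim))
s_next = np.concatenate([s, a], axis=1) @ W_true + 0.01 * rng.normal(size=(n, state_dim))

# Forward-dynamics auxiliary loss: fit a model of s_next from (s, a) and
# measure its prediction error. No reward enters this objective.
x = np.concatenate([s, a], axis=1)
W = np.linalg.lstsq(x, s_next, rcond=None)[0]
forward_loss = np.mean((x @ W - s_next) ** 2)
print(f"forward-dynamics auxiliary loss: {forward_loss:.4f}")
```

In the pre-training setting the abstract reports, a loss of this kind would shape the policy network's representation before (or alongside) reward-driven optimization of expected return.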
TL;DR: Pre-training with auxiliary losses improves the data efficiency of policy optimization on Atari.