Disentangled Cumulants Help Successor Representations Transfer to New Tasks

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: We show that, in the absence of external reward, agents can leverage knowledge gained from unsupervised control of latent features to solve downstream tasks, and that performance is superior when these latent features are disentangled.
Abstract: Biological intelligence can learn to solve many diverse tasks in a data-efficient manner by re-using basic knowledge and skills from one task to another. Furthermore, many such skills are acquired through latent learning, where no explicit supervision for skill acquisition is provided. This is in contrast to state-of-the-art reinforcement learning agents, which typically start learning each new task from scratch and struggle with knowledge transfer. In this paper we propose a principled way to learn and recombine a basis set of policies, which comes with certain guarantees on the coverage of the final task space. In particular, we construct a learning pipeline where an agent invests time learning to perform intrinsically generated, goal-based tasks, and subsequently leverages this experience to quickly achieve a high level of performance on externally specified, often significantly more complex tasks through generalised policy improvement. We demonstrate both theoretically and empirically that such goal-based intrinsic tasks produce more transferable policies when the goals are specified in a space that exhibits a form of disentanglement.
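The recombination step the abstract refers to is generalised policy improvement (GPI) over successor features (Barreto et al., 2017). The sketch below is a minimal illustration of that mechanism, not the paper's implementation: it assumes the reward factorises as r(s, a) = φ(s, a) · w for a task vector w, and that each basis policy π_i comes with successor features ψ^{π_i}(s, a). The function name `gpi_action`, the arguments `psi` and `w`, and all array shapes are illustrative assumptions.

```python
import numpy as np

def gpi_action(psi, w):
    """Choose an action by generalised policy improvement (GPI).

    psi : array of shape (n_policies, n_actions, d)
          successor features psi^{pi_i}(s, a) for each basis policy,
          evaluated at the current state s.
    w   : array of shape (d,)
          task vector, so that the reward is r(s, a) = phi(s, a) . w.

    Under the successor-feature decomposition,
    Q^{pi_i}(s, a) = psi^{pi_i}(s, a) . w, and GPI acts greedily with
    respect to max_i Q^{pi_i}(s, a).
    """
    q = psi @ w                         # Q-values, shape (n_policies, n_actions)
    return int(q.max(axis=0).argmax())  # greedy action over the policy-wise max

# Toy usage: 3 basis policies, 4 actions, 5-dimensional cumulants.
rng = np.random.default_rng(0)
psi = rng.normal(size=(3, 4, 5))
w = rng.normal(size=5)
print(gpi_action(psi, w))
```

The GPI guarantee is that acting greedily with respect to max_i Q^{π_i} performs at least as well on the new task as any single basis policy, which is why the coverage of the basis set (and hence how its goals are specified) matters.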
Keywords: reinforcement learning, representation learning, intrinsic reward, intrinsic control, endogenous, generalized policy improvement, successor features, variational, monet, disentangled