Learning Reusable Options for Multi-Task Reinforcement Learning

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
Abstract: Reinforcement learning (RL) has become an increasingly active area of research in recent years. Although many algorithms allow an agent to solve tasks efficiently, they often ignore the possibility that prior experience related to the task at hand is available. For many practical applications, learning to solve a task from scratch may be infeasible, as it is generally a computationally expensive process; prior experience, however, can be leveraged to make these problems tractable in practice. In this paper, we propose a framework for exploiting existing experience by learning reusable options. We show that after an agent learns policies for solving a small number of problems, the trajectories generated by those policies can be used to learn reusable options that allow the agent to quickly learn how to solve novel, related problems.
Code: https://anonymousfiles.io/Ls3zuIqn/
Keywords: Reinforcement Learning, Temporal Abstraction, Options, Multi-Task RL
TL;DR: We discover options for multi-task RL by maximizing the probability of reproducing optimal trajectories while minimizing the number of decisions needed to do so.
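The TL;DR above describes a trade-off between two terms: the likelihood of reproducing optimal trajectories under the learned options, and the number of decisions (option switches) needed to do so. The sketch below is a minimal, hypothetical illustration of such an objective, not the paper's actual formulation; the function name, the regularization weight `reg`, and the use of per-step termination probabilities as a proxy for decision count are all assumptions for illustration.

```python
import numpy as np

def option_learning_loss(action_log_probs, termination_probs, reg=0.1):
    """Hypothetical loss for learning reusable options from demonstrations.

    Maximizes the probability of reproducing a trajectory's actions
    (negative log-likelihood term) while penalizing the expected number
    of decision points (sum of option-termination probabilities),
    encouraging temporally extended options.

    action_log_probs  : log pi_o(a_t | s_t) for each trajectory step
    termination_probs : beta_o(s_t), probability the active option
                        terminates at each step, forcing a new decision
    reg               : illustrative weight trading off likelihood
                        against the decision-count penalty
    """
    nll = -np.sum(action_log_probs)                 # reproduce the trajectory
    expected_decisions = np.sum(termination_probs)  # fewer switches is better
    return nll + reg * expected_decisions
```

Under this sketch, a set of options that assigns high probability to the demonstrated actions while rarely terminating achieves a low loss, which matches the intuition of "reproducing optimal trajectories with few decisions."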