Automatic Goal Generation using Dynamical Distance Learning

Anonymous

09 Mar 2021 (modified: 05 May 2023), Submitted to SSL-RL 2021
Keywords: Reinforcement Learning, Curriculum Learning, Goal-Conditioned RL, Self-Supervision
TL;DR: A method for goal generation using dynamical distance functions, thereby automatically producing a curriculum.
Abstract: Reinforcement Learning (RL) agents can learn to solve complex sequential decision-making tasks by interacting with the environment, but sample efficiency remains a major challenge. In multi-goal RL, where agents must reach multiple goals to solve complex tasks, improving sample efficiency is especially difficult. Humans and other biological agents, by contrast, learn such tasks far more strategically, following a curriculum in which tasks are sampled at increasing difficulty so that learning progresses gradually and efficiently. In this work, we propose a method for automatic goal generation using a dynamical distance function (DDF) in a self-supervised fashion. A DDF predicts the dynamical distance, i.e., the expected number of time steps, between any two states within a Markov decision process (MDP). Using these predictions, we generate a curriculum of goals at the appropriate difficulty level to facilitate efficient learning throughout training. We evaluate this approach on several goal-conditioned robotic manipulation and navigation tasks, and show improvements in sample efficiency over a baseline that uses only random goal sampling.
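
To make the approach concrete, below is a minimal sketch of the two ingredients the abstract describes: self-supervised DDF training from the agent's own rollouts, and distance-based goal selection for the curriculum. All names here (DynamicalDistance, ddf_training_step, sample_curriculum_goal), the network sizes, and the [d_lo, d_hi] difficulty band are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical sketch: learn a dynamical distance function (DDF) from
# rollouts, then sample goals of intermediate predicted difficulty.
# Architecture and hyperparameters are placeholders, not the paper's.

class DynamicalDistance(nn.Module):
    """Predicts the number of environment steps between two states."""
    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1)).squeeze(-1)

def ddf_training_step(ddf, optimizer, trajectory, batch_size=128):
    """Self-supervised regression: for states s_i, s_j visited on the
    same trajectory (i <= j), the target dynamical distance is j - i."""
    states = torch.as_tensor(np.asarray(trajectory), dtype=torch.float32)
    T = len(states)
    i = torch.randint(0, T, (batch_size,))
    j = torch.randint(0, T, (batch_size,))
    i, j = torch.minimum(i, j), torch.maximum(i, j)
    pred = ddf(states[i], states[j])
    loss = nn.functional.mse_loss(pred, (j - i).float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def sample_curriculum_goal(ddf, start_state, candidate_goals, d_lo, d_hi):
    """Pick a goal whose predicted distance from the start lies in an
    intermediate band [d_lo, d_hi]: far enough to drive learning
    progress, close enough to remain reachable for the current policy."""
    s = torch.as_tensor(start_state, dtype=torch.float32)
    g = torch.as_tensor(candidate_goals, dtype=torch.float32)
    with torch.no_grad():
        d = ddf(s.expand(len(g), -1), g)
    mask = (d >= d_lo) & (d <= d_hi)
    pool = g[mask] if mask.any() else g  # fall back to all candidates
    return pool[torch.randint(0, len(pool), (1,))].squeeze(0).numpy()
```

One plausible design choice this sketch reflects: because the DDF is trained on the policy's own trajectories, its predicted distances shrink as the policy improves, so the same band [d_lo, d_hi] automatically selects progressively harder goals, which is what makes the curriculum self-supervised.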
