Track: Full track
Keywords: automatic curriculum learning, autotelic agents, diversity-seeking agents, intrinsically-motivated goal-exploration processes, intrinsically-motivated reinforcement learning, learning progress, open-ended learning, skill discovery
Abstract: Non-uniform goal selection has the potential to improve the reinforcement learning (RL) of skills over uniform-random selection. In this paper, we introduce a method for learning a goal-selection policy in intrinsically-motivated goal-conditioned RL: "Diversity Progress" (DP). The learner forms a curriculum based on observed improvement in discriminability over its set of goals. Our proposed method is applicable to the class of discriminability-motivated agents, where the intrinsic reward is computed as a function of the agent's certainty that it is following the true goal being pursued. This reward can motivate the agent to learn a set of diverse skills without extrinsic rewards. We demonstrate empirically that a DP-motivated agent learns a set of distinguishable skills faster than previous approaches, and does so without the collapse of the goal distribution that affects some of them. We end with plans to take this proof-of-concept forward.
Submission Number: 22
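The abstract describes goal selection driven by observed progress in discriminability. Below is a minimal, illustrative Python/NumPy sketch of that idea, not the paper's actual method: the class name `DiversityProgressSampler`, the windowed absolute-difference progress estimate, and the uniform mixing weight `eps` are all assumptions made here for illustration.

```python
import numpy as np

class DiversityProgressSampler:
    """Illustrative goal sampler (assumed design, not the paper's exact algorithm):
    goals are sampled in proportion to recent change in per-goal discriminability,
    estimated from a window of discriminator scores recorded for each goal."""

    def __init__(self, num_goals, window=20, eps=0.1, rng=None):
        self.num_goals = num_goals
        self.window = window          # episodes per half-window of the progress estimate
        self.eps = eps                # mixing weight with uniform sampling (avoids starving goals)
        self.rng = rng or np.random.default_rng()
        # Per-goal history of discriminator scores, e.g. q(goal | trajectory).
        self.scores = [[] for _ in range(num_goals)]

    def update(self, goal, discriminator_score):
        """Record the discriminator's score for an episode that pursued `goal`."""
        self.scores[goal].append(float(discriminator_score))

    def _progress(self, goal):
        """Absolute change in mean discriminability between two recent windows."""
        hist = self.scores[goal]
        if len(hist) < 2 * self.window:
            return 1.0  # optimistic default so rarely-tried goals still get sampled
        recent = np.mean(hist[-self.window:])
        older = np.mean(hist[-2 * self.window:-self.window])
        return abs(recent - older)

    def sample_goal(self):
        """Sample a goal: mostly proportional to progress, partly uniform."""
        progress = np.array([self._progress(g) for g in range(self.num_goals)])
        if progress.sum() == 0:
            p = np.full(self.num_goals, 1.0 / self.num_goals)
        else:
            p = progress / progress.sum()
        p = (1 - self.eps) * p + self.eps / self.num_goals
        return int(self.rng.choice(self.num_goals, p=p))
```

In an intrinsically-motivated training loop, `sample_goal` would replace uniform-random goal selection and `update` would be called after each episode with the discriminator's probability of the goal actually pursued, so that goals whose discriminability is currently changing are practiced more often.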