Learning Exploration Policies for Model-Agnostic Meta-Reinforcement Learning

Anonymous

16 May 2019 (modified: 05 May 2023), AMTL 2019
Keywords: MAML, meta-learning, exploration, self-supervised learning
Abstract: Meta-reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings. Existing approaches add auxiliary objectives to promote exploration by the pre-update policy; however, this makes adaptation with a few gradient steps difficult, as the pre-update (exploration) and post-update (exploitation) policies are quite different. Instead, we propose to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We show that using self-supervised or supervised learning objectives for adaptation stabilizes the training process, and we demonstrate the superior performance of our model compared to prior work in this domain.
TL;DR: We propose to use a separate exploration policy to collect the pre-adaptation trajectories in MAML. We also show that using a self-supervised objective in the inner loop leads to more stable training and much better performance.
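A minimal sketch of the bi-level objective suggested by the abstract, under assumed notation not taken from the paper: $\mu_\phi$ denotes the separate exploration policy, $\pi_\theta$ the policy being adapted, $\mathcal{L}_{\mathrm{SS}}$ a self-supervised adaptation loss (e.g., forward-dynamics prediction on the collected trajectories), and $R_{\mathcal{T}_i}$ the return on task $\mathcal{T}_i$. Inner loop (adaptation on exploration data):
$$\theta_i' = \theta - \alpha \nabla_{\theta}\, \mathcal{L}_{\mathrm{SS}}\big(\theta; \tau_i\big), \qquad \tau_i \sim \mu_\phi,\ \ \mathcal{T}_i \sim p(\mathcal{T}).$$
Outer loop (meta-training both the adapted policy parameters $\theta$ and the exploration policy parameters $\phi$ on post-update returns):
$$\max_{\theta,\, \phi}\ \mathbb{E}_{\mathcal{T}_i \sim p(\mathcal{T})}\, \mathbb{E}_{\tau_i' \sim \pi_{\theta_i'}}\big[ R_{\mathcal{T}_i}(\tau_i') \big].$$
Because the inner-loop loss is self-supervised, it does not require reward signal from the pre-update trajectories; the exploration policy is trained only through the outer objective.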
