Abstract: We consider the problem of exploration in meta reinforcement learning. We propose two new meta reinforcement learning algorithms, E-MAML and E-RL2, which modify MAML and RL2 to encourage exploration. Results are presented on a novel environment we call 'Krazy World' and on a set of maze environments. We show that E-MAML and E-RL2 deliver better performance on tasks where exploration is important.
TL;DR: Modifications to MAML and RL2 that should allow for better exploration.
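The abstract and TL;DR describe modifications to MAML and RL2 but do not spell out the objectives. As a rough illustration only, the PyTorch sketch below shows one way an exploration-aware, MAML-style meta-objective can be written: a standard MAML term on post-update returns plus a score-function term that credits the pre-update policy for those returns. The linear policy, the helper names (`pg_surrogate`, `e_maml_style_objective`), the exact form of the exploration term, and the dummy data are all assumptions for illustration, not the paper's formulation; see the linked repository for the actual algorithms.

```python
import torch

def policy_logits(params, states):
    # Tiny linear policy for illustration: logits = states @ W.
    return states @ params

def pg_surrogate(params, states, actions, returns):
    # REINFORCE-style surrogate objective: mean of R * log pi(a|s).
    dist = torch.distributions.Categorical(logits=policy_logits(params, states))
    return (returns * dist.log_prob(actions)).mean()

def e_maml_style_objective(params, pre_batch, post_batch, inner_lr=0.1):
    """Illustrative exploration-aware MAML-style meta-objective (not the paper's exact loss)."""
    pre_states, pre_actions, pre_returns = pre_batch
    post_states, post_actions, post_returns = post_batch

    # Inner adaptation step, as in MAML: one policy-gradient step on pre-update data.
    inner_obj = pg_surrogate(params, pre_states, pre_actions, pre_returns)
    (grad,) = torch.autograd.grad(inner_obj, params, create_graph=True)
    adapted = params + inner_lr * grad

    # Term 1: post-update returns, differentiated through the adaptation step (plain MAML).
    post_obj = pg_surrogate(adapted, post_states, post_actions, post_returns)

    # Term 2 (assumed exploration modification): a score-function term that credits the
    # *initial* sampling policy for post-update returns, rewarding it for collecting
    # exploratory data that makes adaptation succeed.
    pre_dist = torch.distributions.Categorical(logits=policy_logits(params, pre_states))
    exploration_term = post_returns.mean().detach() * pre_dist.log_prob(pre_actions).mean()

    return post_obj + exploration_term

# Usage with dummy data: 8 transitions, 4-dim states, 3 actions.
torch.manual_seed(0)
theta = torch.randn(4, 3, requires_grad=True)
batch = lambda: (torch.randn(8, 4), torch.randint(0, 3, (8,)), torch.randn(8))
objective = e_maml_style_objective(theta, batch(), batch())
objective.backward()  # gradient ascent on `objective` would update theta
```

The intended takeaway of the sketch is only the shape of the idea: unlike plain MAML, the pre-update policy receives explicit credit for post-update returns, which is one way "modifications that should allow for better exploration" can be realized.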
Keywords: reinforcement learning, rl, exploration, meta learning, meta reinforcement learning, curiosity
Code: [episodeyang/e-maml](https://github.com/episodeyang/e-maml) + [6 community implementations](https://paperswithcode.com/paper/?openreview=Skk3Jm96W)
Community Implementations: [8 code implementations](https://www.catalyzex.com/paper/some-considerations-on-learning-to-explore/code)