Some Considerations on Learning to Explore via Meta-Reinforcement Learning

15 Feb 2018 (modified: 07 Apr 2024) · ICLR 2018 Conference Blind Submission
Abstract: We consider the problem of exploration in meta reinforcement learning. We propose two new meta reinforcement learning algorithms, E-MAML and ERL2, and present results on a novel environment we call 'Krazy World' and on a set of maze environments. We show that E-MAML and ERL2 deliver better performance on tasks where exploration is important.
TL;DR: Modifications to MAML and RL2 that should allow for better exploration.
Keywords: reinforcement learning, rl, exploration, meta learning, meta reinforcement learning, curiosity
Code: [episodeyang/e-maml](https://github.com/episodeyang/e-maml) + [6 community implementations](https://paperswithcode.com/paper/?openreview=Skk3Jm96W)
Community Implementations: [7 code implementations](https://www.catalyzex.com/paper/arxiv:1803.01118/code)
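The abstract describes E-MAML as a modification of MAML. For readers unfamiliar with the baseline, here is a minimal sketch of the generic MAML inner/outer-loop structure on a toy 1-D quadratic-loss problem. This is an illustrative sketch only, not the paper's E-MAML implementation; the task distribution, loss, and step sizes are all hypothetical choices for demonstration.

```python
import numpy as np

# Toy task: each "task" is a scalar target; the loss is squared distance to it.
def loss(theta, task):
    return (theta - task) ** 2

def grad(theta, task):
    return 2.0 * (theta - task)

def maml_meta_train(tasks, theta0=0.5, alpha=0.1, beta=0.01, steps=200):
    """Generic MAML sketch: adapt per task (inner loop), then update the
    shared initialization theta against the post-adaptation loss (outer loop)."""
    theta = theta0
    for _ in range(steps):
        meta_grad = 0.0
        for task in tasks:
            # Inner loop: one gradient step of task-specific adaptation.
            theta_prime = theta - alpha * grad(theta, task)
            # Outer loop: gradient of the post-update loss w.r.t. theta.
            # For this quadratic loss, d(theta_prime)/d(theta) = 1 - 2*alpha,
            # so the chain rule gives the factor below.
            meta_grad += (1.0 - 2.0 * alpha) * grad(theta_prime, task)
        theta -= beta * meta_grad / len(tasks)
    return theta

# With symmetric tasks at -1 and +1, the meta-optimal initialization is 0:
# from there, one inner step adapts equally well to either task.
theta_star = maml_meta_train(tasks=[-1.0, 1.0])
```

E-MAML, as the abstract indicates, modifies this objective so that the pre-update sampling policy is credited for gathering exploratory experience that improves the post-update return; the sketch above shows only the vanilla MAML structure it starts from.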
