Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission
Keywords: Model-Based Reinforcement Learning, Deep Exploration, Continuous Visual Control, UCB, Latent Space, Ensembling
Abstract: Learning complex behaviors through interaction requires coordinated long-term planning. Random exploration and novelty search lack task-centric guidance and waste effort on non-informative interactions. Instead, decision making should target samples with the potential to optimize performance far into the future, while only reducing uncertainty where conducive to this objective. This paper presents latent optimistic value exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards. We combine finite-horizon rollouts from a latent model with value function estimates to predict infinite-horizon returns and recover the associated uncertainty through ensembling. Policy training then proceeds on an upper confidence bound (UCB) objective to identify and select the interactions most promising for improving long-term performance. We apply LOVE to continuous visual control tasks and demonstrate improved sample complexity on a selection of benchmark tasks.
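To make the abstract's core computation concrete, the sketch below shows one plausible form of the UCB objective it describes: each ensemble member produces a finite-horizon imagined reward sequence plus a bootstrapped terminal value, these are combined into per-member infinite-horizon return estimates, and the policy objective is the ensemble mean plus a scaled standard deviation. This is an illustrative assumption, not the paper's implementation; the function name `ucb_objective` and the arguments `ensemble_rewards`, `ensemble_terminal_values`, and `beta` are hypothetical.

```python
import numpy as np

def ucb_objective(ensemble_rewards, ensemble_terminal_values, gamma=0.99, beta=1.0):
    """Illustrative UCB exploration objective over an ensemble of imagined rollouts.

    ensemble_rewards: array of shape (K, H) -- predicted rewards from K ensemble
        members over a finite imagination horizon H.
    ensemble_terminal_values: array of shape (K,) -- per-member value estimates
        bootstrapping the infinite-horizon tail beyond the rollout.
    Returns mean + beta * std over the K infinite-horizon return estimates;
    beta trades off exploitation (mean) against optimism (uncertainty bonus).
    """
    K, H = ensemble_rewards.shape
    discounts = gamma ** np.arange(H)
    # Finite-horizon discounted return plus discounted bootstrapped value tail,
    # computed separately for each ensemble member.
    returns = ensemble_rewards @ discounts + (gamma ** H) * ensemble_terminal_values
    return returns.mean() + beta * returns.std()

# Toy usage with random stand-in predictions (K=5 members, H=15 steps):
rng = np.random.default_rng(0)
rewards = rng.normal(size=(5, 15))
values = rng.normal(size=5)
print(ucb_objective(rewards, values, gamma=0.99, beta=1.0))
```

In an actual agent, the policy would be trained to maximize this quantity by differentiating through the latent model's rollouts, so that actions with high predicted return or high ensemble disagreement are preferred.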
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: Our RL algorithm for visual control in continuous state-action spaces enables deep exploration by training its policy on a UCB objective over predicted infinite-horizon returns, derived via latent model ensembling and value function estimation.
Reviewed Version (pdf): https://openreview.net/references/pdf?id=uitWP-s0B