Guided Exploration in Reinforcement Learning via Monte Carlo Critic Optimization

28 May 2022, 15:35 (modified: 25 Jun 2022, 15:08) · DARL 2022
Keywords: reinforcement learning, exploration, Monte Carlo methods
TL;DR: We address the problem of inefficient exploration in deterministic continuous control algorithms. We propose a novel method for directed exploration based on uncertainty minimization via an ensemble of Monte Carlo critics.
Abstract: The class of deep deterministic off-policy algorithms is effectively applied to solve challenging continuous control problems. However, current approaches rely on random noise as the standard exploration method, which has several weaknesses, such as the need for manual tuning on each task and the absence of exploratory calibration during the training process. We address these challenges by proposing a novel guided exploration method that uses a differential directional controller to incorporate scalable exploratory action correction. The controller is realized as an ensemble of Monte Carlo critics that provides the exploratory direction. The proposed method improves on the traditional exploration scheme by adjusting exploration dynamically. We then present a novel algorithm that exploits the proposed directional controller for both policy and critic modification. The presented algorithm outperforms modern algorithms across a variety of problems from the DMControl suite.
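The abstract describes correcting an exploratory action using the disagreement of a critic ensemble. The paper's exact update rule is not given here, so the sketch below is a hypothetical illustration of the general idea: with a toy ensemble of linear critics, the gradient of the ensemble's standard deviation with respect to the action has a closed form, and nudging the action along that gradient steers exploration toward regions where the critics disagree (a common uncertainty proxy). All names, the linear critic form, and the step size `scale` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, N_CRITICS = 3, 2, 5

# Hypothetical stand-in critics: Q_i(s, a) = w_i . concat(s, a).
W = rng.normal(size=(N_CRITICS, STATE_DIM + ACTION_DIM))

def ensemble_q(state, action):
    """One Q-value per critic for the given state-action pair."""
    x = np.concatenate([state, action])
    return W @ x

def exploratory_correction(state, action, scale=0.1):
    """Nudge the action along the gradient of ensemble disagreement.

    For linear critics, std over critics of w_i . x is a convex function
    of the action, and its gradient follows from the centred weights:
    d std/da = (1 / (N * std)) * sum_i (q_i - q_mean) (w_i,a - w_mean,a).
    """
    q = ensemble_q(state, action)
    q_centred = q - q.mean()
    Wa = W[:, STATE_DIM:]                 # action columns of each critic
    Wa_centred = Wa - Wa.mean(axis=0)
    std = q.std()
    if std < 1e-8:                        # critics agree: no correction
        return action
    grad = (q_centred @ Wa_centred) / (N_CRITICS * std)
    return action + scale * grad          # move toward higher disagreement

state = rng.normal(size=STATE_DIM)
action = rng.normal(size=ACTION_DIM)
corrected = exploratory_correction(state, action)
```

Because the ensemble standard deviation of linear critics is convex in the action, any positive step along this gradient cannot decrease the disagreement, so the corrected action is at least as "novel" (by this proxy) as the original one. In the paper's setting the critics are deep networks and the gradient would come from autodiff instead of a closed form.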