Predicting Multiple Actions for Stochastic Continuous Control

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: We introduce a new approach to estimating continuous actions with actor-critic algorithms for reinforcement learning problems. Policy gradient methods usually predict a single continuous action for any given state, which may be suboptimal since it is only a point estimate rather than a full distribution. We instead predict $M$ actions with the policy network (actor) and uniformly sample one of them at each state, during both training and testing (an illustrative sketch of such an actor appears after this list). This allows the agent to learn a simple stochastic policy whose expected return is easy to compute. In all of our experiments, this leads to better exploration of the state space during training and convergence to a better policy.
  • TL;DR: We introduce a novel reinforcement learning algorithm that predicts multiple actions and samples one of them uniformly.
  • Keywords: Reinforcement Learning, DDPG, Multiple Action Prediction
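
To make the mechanism in the abstract concrete, here is a minimal sketch of an actor with $M$ output heads and uniform sampling over the predicted candidates. This is an assumption-laden illustration, not the authors' implementation: the class name `MultiActionActor`, the layer sizes, and the `tanh` action bound are all hypothetical choices.

```python
import torch
import torch.nn as nn

class MultiActionActor(nn.Module):
    """Actor that predicts M candidate actions per state.

    Illustrative sketch of the multiple-action-prediction idea;
    architecture details are assumptions, not the paper's.
    """

    def __init__(self, state_dim, action_dim, num_actions=5, hidden=256):
        super().__init__()
        self.num_actions = num_actions
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            # One head per candidate: M * action_dim outputs in total.
            nn.Linear(hidden, num_actions * action_dim),
            nn.Tanh(),  # bounded continuous actions in [-1, 1]
        )

    def forward(self, state):
        # All M candidate actions, shape (batch, M, action_dim).
        return self.net(state).view(-1, self.num_actions, self.action_dim)

    def act(self, state):
        # Uniformly sample one of the M predicted actions, as the
        # abstract describes for both training and test time.
        candidates = self.forward(state)                         # (B, M, A)
        batch = candidates.shape[0]
        idx = torch.randint(self.num_actions, (batch,))          # (B,)
        return candidates[torch.arange(batch), idx]              # (B, A)

# Usage example on a dummy state batch:
actor = MultiActionActor(state_dim=8, action_dim=2)
action = actor.act(torch.randn(4, 8))  # one sampled action per state
```

Because the sampling is uniform over a finite set of $M$ heads, the policy's expected return is simply the mean of the critic's values over the $M$ candidates, which is what makes the expected return "easy to compute" in this scheme.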
