Addressing Sample Complexity in Visual Tasks Using Hindsight Experience Replay and Hallucinatory GANs
Keywords: Sample Complexity, Hindsight Experience Replay, Reinforcement Learning
Abstract: Reinforcement Learning (RL) algorithms typically require millions of environment interactions to learn successful policies in sparse reward settings. Hindsight Experience Replay (HER) was introduced to improve sample efficiency by re-imagining unsuccessful trajectories as successful ones, replacing the originally intended goals with states that were actually achieved. However, HER cannot be directly applied to visual environments where goal states are characterized by the presence of distinct visual features. In this work, we show how visual trajectories can be hallucinated to appear successful by altering agent observations using a generative model trained on relatively few snapshots of the goal.
We then use this model in combination with HER to train RL agents in visual settings. We validate our approach on 3D navigation tasks and a simulated robotics application and show marked improvement over standard RL algorithms and baselines derived from previous work.
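Below is a minimal sketch of the relabeling idea described in the abstract, assuming transitions are stored as (observation, action, next observation, reward, done) tuples. The function names, the stub generator `hallucinate_goal_features`, and the reward values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


def hallucinate_goal_features(observation: np.ndarray) -> np.ndarray:
    """Stand-in for a generative model that edits an observation so the goal's
    visual features appear present. In the paper this role is played by a
    generative model trained on a small set of goal snapshots; here it is an
    identity stub so the relabeling logic below is runnable.
    """
    return observation


def hindsight_relabel_visual(trajectory,
                             reward_if_success: float = 0.0,
                             reward_otherwise: float = -1.0):
    """HER-style relabeling for visual tasks (illustrative sketch).

    `trajectory` is a list of (obs, action, next_obs, reward, done) tuples from
    an unsuccessful episode. The final achieved observation is hallucinated to
    contain the goal's visual features, so the relabeled transitions provide a
    success signal the agent can learn from despite sparse rewards.
    """
    relabeled = []
    last_index = len(trajectory) - 1
    for t, (obs, action, next_obs, _reward, _done) in enumerate(trajectory):
        is_last = t == last_index
        # Only the final observation is edited to look like the goal was reached.
        new_next_obs = hallucinate_goal_features(next_obs) if is_last else next_obs
        new_reward = reward_if_success if is_last else reward_otherwise
        relabeled.append((obs, action, new_next_obs, new_reward, is_last))
    return relabeled
```

The relabeled transitions would then be added to the replay buffer alongside the original ones, so a standard off-policy RL algorithm can train on both.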