Keywords: Reinforcement learning, representation learning, slow feature analysis
TL;DR: Combining slow feature analysis with deep reinforcement learning lets agents recover their location and heading from visual streams and can improve navigation compared to representations learned with convolutional neural networks.
Abstract: Visual navigation requires a wide range of capabilities in an agent. A crucial one is the ability to determine the agent's own location and heading in an environment. However, existing navigation approaches either assume this information is given or use methods that lack a suitable inductive bias and accumulate error over time. Inspired by neuroscience research, the method of slow feature analysis (SFA) overcomes these limitations and extracts agent location and heading from a visual data stream, but it has not been combined with modern deep reinforcement learning (RL) agents. In this paper, we compare SFA representations with those learned by convolutional neural networks in deep RL agents. We also demonstrate how using SFA representations can improve navigation performance. Lastly, we empirically and conceptually investigate the limitations of using SFA and discuss how they currently prevent it from being used more widely for visual navigation in RL.
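The abstract's central tool, slow feature analysis, extracts the most slowly varying directions from a time series. A minimal linear-SFA sketch in NumPy is shown below for illustration; this is not the paper's implementation, and all names and the toy data are assumptions. The standard recipe is: center and whiten the data, then take the eigenvectors of the covariance of temporal differences with the smallest eigenvalues.

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """Illustrative linear slow feature analysis (not the paper's code).

    X: array of shape (T, D), a time series of D-dimensional observations.
    Returns the projection of X onto the n_features slowest directions.
    """
    # Center the data.
    X = X - X.mean(axis=0)
    # Whiten: decorrelate and normalize variance via eigendecomposition.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (eigvecs / np.sqrt(eigvals))
    # Slow directions minimize the variance of temporal differences.
    dvals, dvecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    # eigh returns ascending eigenvalues, so the first columns are slowest.
    return Z @ dvecs[:, :n_features]

# Toy check: recover a slow sinusoid from a linear mixture with a fast one.
rng = np.random.default_rng(0)
t = np.arange(2000)
slow = np.sin(2 * np.pi * t / 2000)        # one slow cycle
fast = np.sin(2 * np.pi * t / 2000 * 50)   # fifty fast cycles
X = np.stack([slow, fast], axis=1) @ rng.normal(size=(2, 2)).T
recovered = linear_sfa(X, n_features=1)[:, 0]
print(abs(np.corrcoef(recovered, slow)[0, 1]))  # close to 1.0
```

In the navigation setting described above, the inductive bias is that an agent's position and heading change slowly relative to pixel-level variation, which is why the slowest features of the visual stream tend to encode them.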
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 19498