Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks

Published: 08 Feb 2024, Last Modified: 08 Feb 2024
Venue: XAI4DRL Workshop
License: CC BY 4.0
Confirmation: I accept the constraint that, if the paper is accepted, at least one of the authors will attend the workshop and present the work.
Keywords: visual navigation, slow feature analysis, reinforcement learning
TL;DR: This paper shows how slow feature analysis can help reinforcement learning in visual navigation tasks by extracting agent location and heading from visual observations.
Abstract: Visual navigation requires a range of capabilities. A crucial one is the ability of an agent to determine its own location and heading in an environment. Prior works commonly assume this information is given, or use methods that lack a suitable inductive bias and accumulate error over time. In this work, we show how the method of slow feature analysis (SFA), inspired by neuroscience research, overcomes both limitations by generating interpretable representations of visual data that encode the location and heading of an agent. We employ SFA in a modern reinforcement learning context, analyse and compare representations, and illustrate where hierarchical SFA can outperform other feature extractors on navigation tasks.
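For readers unfamiliar with slow feature analysis, the following is a minimal sketch of standard linear SFA in plain NumPy, not the hierarchical SFA pipeline evaluated in the paper; the function name `linear_sfa` and the toy drifting-latent data are illustrative assumptions. The idea is to find projections of a time series that vary as slowly as possible over time, subject to unit variance and decorrelation, which for a navigating agent tend to correlate with quantities such as position and heading.

```python
import numpy as np


def linear_sfa(X, n_components=2):
    """Minimal linear slow feature analysis (sketch).

    X: array of shape (T, D), a time series of flattened observations.
    Returns a (D, n_components) projection onto the slowest-varying directions.
    """
    # Center the data.
    X = X - X.mean(axis=0)

    # Whiten: rotate and scale so the data has (approximately) identity covariance.
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-8  # drop near-zero-variance directions to avoid blow-up
    W_whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = X @ W_whiten

    # Covariance of temporal differences; the slow features are the directions
    # in whitened space along which the signal changes least from step to step.
    dZ = np.diff(Z, axis=0)
    cov_dot = np.cov(dZ, rowvar=False)
    _, eigvec_dot = np.linalg.eigh(cov_dot)  # ascending eigenvalues: smallest = slowest

    W_slow = eigvec_dot[:, :n_components]
    return W_whiten @ W_slow


# Usage on toy data: a slowly drifting 2-D latent embedded in noisy
# high-dimensional observations (a stand-in for visual input).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D = 1000, 32
    latent = np.cumsum(rng.normal(scale=0.01, size=(T, 2)), axis=0)
    mixing = rng.normal(size=(2, D))
    X = latent @ mixing + rng.normal(scale=0.1, size=(T, D))

    W = linear_sfa(X, n_components=2)
    slow_features = (X - X.mean(axis=0)) @ W  # shape (T, 2)
```

Hierarchical SFA, as used in the paper, stacks such stages on local image patches so that nonlinear, spatially structured slow features can be extracted from raw visual observations.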
Submission Type: Long Paper
Submission Number: 6