Representing Repeated Structure in Reinforcement Learning Using Symmetric Motifs

Published: 07 Nov 2022, Last Modified: 05 May 2023 · NeurReps 2022 Poster
Keywords: successor representation, reinforcement learning, symmetry
TL;DR: The successor representation can be compressed by finding local symmetries
Abstract: Transition structures in reinforcement learning can contain repeated motifs and redundancies. In this preliminary work, we suggest using the geometric decomposition of the adjacency matrix to form a mapping into an abstract state space. Using the Successor Representation (SR) framework, we decouple symmetries in the transition structure from the reward structure, and form a natural structural hierarchy by using separate SRs for the global and local structures of a given task. We demonstrate that there is low error when performing policy evaluation using this method and that the resulting representations can be significantly compressed.
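The abstract's key mechanism, decoupling transition structure from reward via the SR, can be sketched in a few lines. The following is a minimal illustration, not the paper's method: the 4-state ring environment, reward vector, and variable names are assumptions made for the example. It shows the closed-form SR under a fixed policy, M = (I − γP)⁻¹, and how policy evaluation factors into a transition term (M) and a reward term (r), so that symmetric states receive equal values automatically.

```python
import numpy as np

gamma = 0.9  # discount factor

# Transition matrix under a fixed policy: a toy 4-state ring,
# which has an obvious symmetry (states 1 and 3 are interchangeable
# when reward sits at state 0). This environment is illustrative only.
P = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])

# Successor Representation: expected discounted future occupancy of
# each state from each state, M = (I - gamma * P)^-1.
M = np.linalg.inv(np.eye(4) - gamma * P)

# Policy evaluation decouples structure (M) from reward (r): V = M r.
r = np.array([1.0, 0.0, 0.0, 0.0])  # assumed reward only at state 0
V = M @ r
```

Because the ring is symmetric under swapping states 1 and 3, their values come out identical, which is the kind of redundancy the paper proposes to exploit for compression.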