Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings

Published: 21 Oct 2022, Last Modified: 05 May 2023
nCSI WS @ NeurIPS 2022 Poster
Keywords: Deep Reinforcement Learning, Transfer Learning, Model-Based Reinforcement Learning
TL;DR: We represent different contexts of the same environments as state-machine abstractions called reward machines which augment the state space of a deep reinforcement learning agent.
Abstract: Deep reinforcement learning (DRL) algorithms have seen great success in performing a plethora of tasks, but often have trouble adapting to changes in the environment. We address this issue by using reward machines (RMs), a graph-based abstraction of the underlying task, to represent the current setting or context. Using a graph neural network (GNN), we embed the RMs into deep latent vector representations and provide them to the agent to enhance its ability to adapt to new contexts. To the best of our knowledge, this is the first work to embed contextual abstractions and let the agent decide how to use them. Our preliminary empirical evaluation demonstrates improved sample efficiency of our approach upon context transfer on a set of grid navigation tasks.
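To illustrate the idea in the abstract, here is a minimal sketch of embedding a reward machine's graph with one round of message passing and concatenating the result onto the agent's observation. All names, shapes, and the toy 3-state RM are illustrative assumptions, not the paper's actual architecture, which uses a full GNN encoder:

```python
import numpy as np

def embed_reward_machine(node_feats, adj, w1, w2):
    """Toy GNN stand-in: one round of mean-neighbor message passing,
    then mean pooling to a single reward-machine embedding.
    node_feats: (n_nodes, d_in), adj: (n_nodes, n_nodes) adjacency
    (hypothetical shapes chosen for this sketch)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    msgs = (adj @ node_feats) / deg                   # aggregate neighbor features
    h = np.tanh(node_feats @ w1 + msgs @ w2)          # update node embeddings
    return h.mean(axis=0)                             # pool nodes -> one vector

# Toy 3-state reward machine: u0 -> u1 -> u2 (accepting), one-hot node features
adj = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [0., 0., 0.]])
node_feats = np.eye(3)

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))

rm_embedding = embed_reward_machine(node_feats, adj, w1, w2)
env_state = np.zeros(4)                               # placeholder grid observation
# The augmented state the DRL policy would consume:
augmented_state = np.concatenate([env_state, rm_embedding])
```

Swapping the RM (a new context) changes only `rm_embedding`, so the same policy network can condition on the new task structure without retraining from scratch.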