Track: Type A (Regular Papers)
Keywords: World models, Generalization, Reinforcement learning
Abstract: $\textit{World Models}$ enable Reinforcement Learning agents to construct an internal representation of the external environment by compressing sensory experience into a form suitable for reasoning, planning, and guiding behavior. They are inspired by the hippocampal formation in the limbic system of the mammalian brain, which facilitates spatial navigation, abstract problem-solving, generalization of knowledge, and transfer of learned skills across a wide range of contexts.
In this paper, we consider two different world model architectures in the reinforcement learning setting: one using a $\textit{stochastic transformer}$, and one using the hippocampus-inspired $\textit{TEM transformer}$. We investigate the extent to which agents equipped with such world models can be effectively trained across a small set of diverse environments, and how well they transfer and generalize between them.
Our experiments show early but promising signs that multi-environment agents can not only solve multiple tasks with shared parameters, but can also address the spatial invariance problem in a highly sample-efficient manner.
Serve As Reviewer: ~Dennis_J._N._J._Soemers1
Submission Number: 80