OCAtari: Object-Centric Atari 2600 Reinforcement Learning Environments

Published: 07 Jun 2024, Last Modified: 07 Jun 2024 (InterpPol @RLC-2024, CC BY 4.0)
Keywords: Reinforcement Learning, Object-centric RL, Neurosymbolic RL, Causal RL, Atari Learning Environments
TL;DR: OCAtari provides object-centric states from the Atari Learning Environments in a resource-efficient way.
Abstract: Cognitive science and psychology suggest that object-centric representations of complex scenes are a promising step towards enabling efficient abstract reasoning from low-level perceptual features. Yet, most deep reinforcement learning approaches only rely on pixel-based representations that do not capture the compositional properties of natural scenes. For this, we need environments and datasets that allow us to develop and evaluate object-centric approaches. In our work, we extend the Atari Learning Environments, the most-used evaluation framework for deep RL approaches, by introducing OCAtari, which performs resource-efficient extractions of the object-centric states for these games. Our framework allows for object discovery, object representation learning, as well as object-centric RL. We evaluate OCAtari's detection capabilities and resource efficiency. Our source code is available at github.com/k4ntz/OC_Atari.
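
The following minimal usage sketch illustrates how object-centric states might be obtained with OCAtari; the class name, module path, and keyword arguments follow the repository's README and are stated here as assumptions, since they may differ between versions.

# Minimal sketch (assumptions: OCAtari class in ocatari.core, mode/hud keyword
# arguments, and the env.objects attribute, as described in the project README).
from ocatari.core import OCAtari

# Wrap an Atari game; mode="ram" extracts objects from the emulator RAM,
# mode="vision" extracts them from the rendered frame.
env = OCAtari("Pong", mode="ram", hud=False, render_mode="rgb_array")

obs, info = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random policy, for illustration only
    obs, reward, terminated, truncated, info = env.step(action)
    # env.objects holds the object-centric state: a list of detected game
    # objects with positional properties such as x, y, w, h.
    for obj in env.objects:
        print(obj)
    if terminated or truncated:
        obs, info = env.reset()
env.close()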
Submission Number: 16