Relational Object-Centric Actor-Critic

Published: 28 Jan 2025 · Last Modified: 23 Jun 2025 · CLeaR 2025 Poster · CC BY 4.0
Keywords: Object-centric Representations, Graph Neural Networks, Actor-critic, Model-based Reinforcement Learning
TL;DR: We propose a novel object-centric reinforcement learning algorithm that combines actor-critic and model-based approaches by incorporating an object-centric world model into the critic.
Abstract: There have recently been significant advances in unsupervised object-centric representation learning and its application to downstream tasks. Recent works support the argument that employing disentangled object representations in image-based object-centric reinforcement learning tasks facilitates policy learning. We propose a novel object-centric reinforcement learning algorithm that combines actor-critic and model-based approaches by incorporating an object-centric world model into the critic. The proposed method fills a research gap in developing efficient object-centric world models for reinforcement learning settings with discrete or continuous action spaces. We evaluated our algorithm in a simulated 3D robotic environment and a 2D environment with compositional structure. As baselines, we consider a state-of-the-art model-free actor-critic algorithm built upon a transformer architecture and a state-of-the-art monolithic model-based algorithm. While the proposed method demonstrates performance comparable to the baselines on easier tasks, it outperforms them within a 1M environment-step budget on more challenging tasks with an increased number of objects or more complex dynamics.
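To give a sense of what an object-centric, relational critic looks like, here is a minimal NumPy sketch. It is an illustrative assumption, not the paper's actual architecture: object slots exchange pairwise messages in a single GNN-style message-passing step, and the updated slots are pooled (permutation-invariantly) into a scalar value estimate. All weight shapes and the single-step design are hypothetical.

```python
import numpy as np

def relational_critic_value(objects, w_msg, w_val):
    """Hedged sketch of an object-centric critic.

    objects: (n, d) array of per-object slot features.
    w_msg:   (2d, d) message weights (illustrative assumption).
    w_val:   (d,) value-head weights (illustrative assumption).
    """
    n, d = objects.shape
    # One round of pairwise relational messages between object slots.
    msgs = np.zeros((n, d))
    for i in range(n):
        for j in range(n):
            if i != j:
                pair = np.concatenate([objects[i], objects[j]])
                msgs[i] += np.tanh(pair @ w_msg)  # message from slot j to slot i
    updated = objects + msgs / max(n - 1, 1)      # mean-aggregate messages
    pooled = updated.mean(axis=0)                 # permutation-invariant pooling
    return float(pooled @ w_val)                  # scalar state-value estimate

# Toy usage with random weights (illustrative only).
rng = np.random.default_rng(0)
d = 4
objs = rng.normal(size=(3, d))
w_msg = rng.normal(size=(2 * d, d))
w_val = rng.normal(size=(d,))
v = relational_critic_value(objs, w_msg, w_val)
```

Because messages are summed over all partners and slots are mean-pooled, the value estimate is invariant to the ordering of object slots, which is the key property object-centric critics exploit.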
Supplementary Material: zip
Publication Agreement: pdf
Submission Number: 125