Learning Object-Centered Autotelic Behaviors with Graph Neural Networks

Published: 23 Apr 2022, Last Modified: 22 Oct 2023. Venue: ALOE@ICLR2022
Keywords: Goal-conditioned reinforcement learning, intrinsically motivated agents, graph neural networks, multi-object manipulation, semantic goals, curriculum learning
TL;DR: We show that coupling semantic relational goal spaces with GNN-based architectures for both the policy and the critic enables efficient transfer between skills in multi-object manipulation domains.
Abstract: Although humans live in an open-ended world with endless challenges, they do not have to learn from scratch whenever they encounter a new task. Rather, they have access to a handful of previously learned skills, which they rapidly adapt to new situations. In artificial intelligence, autotelic agents—which are intrinsically motivated to represent and set their own goals—exhibit promising skill transfer capabilities. However, their learning capabilities are highly constrained by their policy and goal space representations. In this paper, we propose to investigate the impact of these representations. We study different implementations of autotelic agents using four types of Graph Neural Network (GNN) policy representations and two types of goal spaces, either geometric or predicate-based. We show that combining sufficiently expressive object-centered architectures with semantic relational goals enables efficient transfer between skills and promotes behavioral diversity. We also release our graph-based implementations to encourage further research in this direction.
Community Implementations: 1 code implementation listed on CatalyzeX (https://www.catalyzex.com/paper/arxiv:2204.05141/code)
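To make the object-centered, goal-conditioned setup concrete, below is a minimal sketch of a GNN policy conditioned on a semantic relational goal: objects are nodes, and each object pair carries the binary predicates (e.g., close, above) that the goal specifies for that pair. This is an illustrative assumption-laden example, not the authors' released implementation; the class name `GNNPolicy`, the tensor shapes, the single round of message passing, and the sum/max aggregation choices are all hypothetical, and the paper itself compares four different GNN variants for both the policy and the critic.

```python
import torch
import torch.nn as nn


class GNNPolicy(nn.Module):
    """Hypothetical object-centered, goal-conditioned policy sketch.

    Nodes are objects; each directed pair (i, j) carries the slice of the
    semantic goal concerning that pair (binary predicates such as
    close(i, j) or above(i, j)). One round of message passing followed by
    a permutation-invariant readout produces the action.
    """

    def __init__(self, obj_dim, goal_edge_dim, action_dim, hidden=128):
        super().__init__()
        # Edge model: (sender, receiver, pairwise goal predicates) -> message
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * obj_dim + goal_edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Node model: (object features, aggregated incoming messages) -> embedding
        self.node_mlp = nn.Sequential(
            nn.Linear(obj_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Readout: pooled node embeddings -> tanh-squashed continuous action
        self.readout = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, objects, goal_edges):
        # objects:    (batch, n_obj, obj_dim)        per-object features
        # goal_edges: (batch, n_obj, n_obj, goal_edge_dim)  pairwise predicates
        b, n, _ = objects.shape
        senders = objects.unsqueeze(2).expand(b, n, n, -1)    # o_i broadcast over pairs
        receivers = objects.unsqueeze(1).expand(b, n, n, -1)  # o_j broadcast over pairs
        messages = self.edge_mlp(
            torch.cat([senders, receivers, goal_edges], dim=-1)
        )
        incoming = messages.sum(dim=1)                         # aggregate over senders
        nodes = self.node_mlp(torch.cat([objects, incoming], dim=-1))
        return self.readout(nodes.max(dim=1).values)           # pool over objects


if __name__ == "__main__":
    # Toy shapes: 3 objects, 10-dim object features, 4 pairwise predicates, 4-dim action
    policy = GNNPolicy(obj_dim=10, goal_edge_dim=4, action_dim=4)
    obs = torch.randn(2, 3, 10)
    goal = torch.randint(0, 2, (2, 3, 3, 4)).float()
    print(policy(obs, goal).shape)  # torch.Size([2, 4])
```

Because both the message passing and the pooling are shared across objects and aggregated symmetrically, the same network can, in principle, be reused as a goal-conditioned critic and evaluated on scenes with varying numbers of objects, which is the kind of transfer between skills the paper studies.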