Concrete-to-Abstract Goal Embeddings for Self-Supervised Reinforcement Learning

ICLR 2026 Conference Submission 25181 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: self-supervised reinforcement learning, goal representation learning, goal abstraction
Abstract: Self-supervised reinforcement learning (RL) aims to train agents without pre-specified external reward functions, enabling them to autonomously acquire the ability to generalize across tasks. A common substitute for external rewards is the use of observational goals sampled from experience, especially in goal-conditioned RL. However, such goals often constrain the goal space: they may be too concrete (requiring exact pixel-level matches) or too abstract (involving ambiguous observations), depending on the observation structure. Here we propose a unified hierarchical goal space that integrates both concrete and abstract goals. Observation sequences are encoded into this partially ordered space, in which a subset relation naturally induces a hierarchy from concrete to abstract goals. This encoding enables agents to disambiguate specific states while also generalizing to shared concepts. We implement this approach using a recurrent neural network to encode sequences and an energy function to learn the partial order, trained end-to-end with contrastive learning. The energy function then allows the agent to traverse the induced hierarchy and vary the degree of abstraction. In experiments on navigation and robotic manipulation, agents trained with our hierarchical goal space achieve higher task success and greater generalization to novel tasks compared to agents limited to purely observational goals.
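To make the abstract's recipe concrete, below is a minimal PyTorch sketch of one plausible instantiation: a GRU goal encoder plus an order-embedding energy in the style of Vendrov et al. (2016), trained with a margin-based contrastive loss. The paper does not specify these choices; the GRU architecture, the coordinatewise max(0, ·) energy, the softplus non-negativity constraint, and all names (`GoalEncoder`, `order_energy`, `contrastive_loss`) are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoalEncoder(nn.Module):
    """Encodes an observation sequence into a goal embedding.
    The GRU backbone is an assumption; the paper only states 'recurrent neural network'."""
    def __init__(self, obs_dim: int, goal_dim: int, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, goal_dim)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim); the final hidden state summarizes the sequence
        _, h = self.rnn(obs_seq)
        # Non-negative coordinates make the coordinatewise partial order well behaved
        return F.softplus(self.head(h[-1]))

def order_energy(abstract: torch.Tensor, concrete: torch.Tensor) -> torch.Tensor:
    """Order-embedding penalty: near zero iff abstract <= concrete coordinatewise,
    i.e. the concrete goal sits below the abstract goal in the induced hierarchy."""
    return torch.clamp(abstract - concrete, min=0).pow(2).sum(-1)

def contrastive_loss(energy_pos: torch.Tensor,
                     energy_neg: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    # Pull related (abstract, concrete) pairs to low energy;
    # push unrelated pairs above the margin
    return energy_pos.mean() + torch.clamp(margin - energy_neg, min=0).mean()
```

The asymmetry of the max(0, ·) penalty is what lets a single energy function encode a subset-like partial order rather than a symmetric similarity: E(a, c) ≈ 0 while E(c, a) > 0 marks a as strictly more abstract than c, so thresholding the energy gives a way to move up or down the hierarchy, consistent with the traversal described in the abstract.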
Primary Area: reinforcement learning
Submission Number: 25181