A Survey of State Representation Learning for Deep Reinforcement Learning

TMLR Paper4501 Authors

17 Mar 2025 (modified: 20 Jun 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: Representation learning methods are an important tool for addressing the challenges posed by complex observation spaces in sequential decision-making problems. Recently, a wide variety of approaches have been proposed for learning meaningful state representations in reinforcement learning, enabling better sample efficiency, generalization, and performance. This survey provides a broad categorization of these methods within the model-free online setting, exploring how they differ in the way they learn state representations. We categorize the methods into six main classes, detailing their mechanisms, benefits, and limitations. Through this taxonomy, we aim to enhance the understanding of this field and provide a guide for new researchers. We also discuss techniques for assessing the quality of representations and detail relevant future directions.
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: /forum?id=fE8LRT2XKe
Changes Since Last Submission:

The previous version we submitted imported the geometry package, which is already preconfigured by the TMLR style file. This led to conflicting settings (e.g., the top line extended far beyond the margin). We have fixed this problem in the current version.

Assigned Action Editor: Emmanuel Bengio
Submission Number: 4501