Scaling the Heights of Learning with Hierarchical Approaches in Reinforcement Learning

28 Sept 2024 (modified: 10 Oct 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Representation Learning, Reinforcement Learning, Temporal Abstraction, Cumulative Reward Optimization
TL;DR: We propose a representation learning model that separates short-term control actions from long-term goals, optimizing cumulative reward and enabling efficient transfer learning in reinforcement learning.
Abstract: This research explores a novel hierarchical representation learning framework designed to enhance planning and reinforcement learning (RL) in complex environments. By decoupling high-level decision-making from low-level control actions, our framework significantly improves sample efficiency and transfer-learning performance across diverse tasks. We validate our approach through experiments in several environments, including Meta-World, AirSim, and AI Habitat, demonstrating that our hierarchical model consistently outperforms traditional flat models in cumulative reward and adaptability to new tasks. This work lays the foundation for scalable AI systems capable of navigating the complexities of real-world applications.
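No code accompanies this abstract, so the following is a minimal, generic sketch of the high-level/low-level decoupling the abstract describes, written in PyTorch. All names (HighLevelPolicy, LowLevelPolicy, rollout_step) and the re-planning interval k are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HighLevelPolicy(nn.Module):
    """Proposes an abstract subgoal from the current state (slow timescale)."""
    def __init__(self, state_dim, goal_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, goal_dim),
        )

    def forward(self, state):
        return self.net(state)  # subgoal embedding

class LowLevelPolicy(nn.Module):
    """Maps (state, subgoal) to a primitive-action distribution (fast timescale)."""
    def __init__(self, state_dim, goal_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))  # action logits

def rollout_step(high, low, state, goal, t, k=10):
    """One environment step: re-plan the subgoal every k steps, act every step."""
    if t % k == 0:  # temporal abstraction: high level decides on a slower clock
        goal = high(state)
    logits = low(state, goal)
    action = torch.distributions.Categorical(logits=logits).sample()
    return action, goal
```

In this kind of two-timescale design, holding each subgoal fixed for k low-level steps is what provides the temporal abstraction: the high level searches a much smaller decision space, which is the usual mechanism behind the sample-efficiency and transfer gains the abstract claims.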
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12954