Keywords: hierarchical reinforcement learning
TL;DR: We use a diverse ensemble to generalize a given sub-goal to relearn previously discovered skills.
Abstract: Transfer is a key promise of hierarchical reinforcement learning, but requires first learning transferable skills.
For an agent to effectively transfer a skill, it must identify the features that generalize and define the skill over this subset.
However, this task is under-specified from a single context, as the agent has no prior knowledge of what future tasks may be introduced.
Since successful transfer requires a skill to reliably achieve a sub-goal from different states, we focus our attention on ensuring sub-goals are represented in a transferable way.
For each sub-goal, we train an ensemble of classifiers while explicitly incentivizing them to use minimally overlapping features.
Each ensemble member represents a unique hypothesis about the transferable features of a sub-goal that the agent can use to learn a skill in previously unseen portions of the environment.
Environment reward then determines which hypothesis is most transferable for the given task, based on the intuition that useful sub-goals lead to greater reward maximization.
We apply these reusable sub-goals in MiniGrid and Montezuma's Revenge, allowing the agent to relearn previously defined skills in unseen parts of the state-space.
Supplementary Material: zip
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11553