Abstract: Hierarchical Reinforcement Learning (HRL) has the potential to simplify learning in environments with long horizons and sparse rewards. The idea behind HRL is to decompose a complex decision-making problem into smaller, manageable sub-problems, allowing an agent to learn more efficiently and effectively. In this thesis, we contribute to the field of HRL through the study of state space partition representations, aiming to discover representations that decompose a complex state space into a set of small, interconnected partitions. We begin by identifying the properties of ideal state space partitions for HRL and then explore different methods for creating such partitions. We present algorithms that leverage these representations to learn more effectively in sparse-reward settings. Finally, we show how to combine the learned representations with Goal-Conditioned Reinforcement Learning (GCRL), and we additionally present state representations useful for GCRL.