Disentangling Controlled Effects for Hierarchical Reinforcement Learning

Published: 09 Feb 2022, Last Modified: 05 May 2023. CLeaR 2022 Poster.
Keywords: unsupervised reinforcement learning, reinforcement learning, causality
TL;DR: We use counterfactuals to build a hierarchy of controlled effects and leverage it for hierarchical reinforcement learning.
Abstract: Exploration and credit assignment remain challenging problems for RL agents under sparse rewards. We argue that these challenges arise partly from the intrinsic rigidity of operating at the level of actions. Actions can precisely define how to perform an activity but are ill-suited to describe what activity to perform. Controlled effects, in contrast, describe transformations in the environment caused by the agent. These transformations are inherently composable and temporally abstract, making them ideal for descriptive tasks. This work introduces CEHRL, a hierarchical method that leverages the compositional nature of controlled effects to expedite the learning of task-specific behavior and to aid exploration. Borrowing counterfactual and normality measures from the causal literature, CEHRL learns an implicit hierarchy of transformations an agent can perform on its environment. This hierarchy allows a high-level policy to set temporally abstract goals and, in doing so, to achieve long-horizon credit assignment. Experimental results show that using effects instead of actions provides a more efficient exploration mechanism. Moreover, by leveraging prior knowledge in the hierarchy, CEHRL assigns credit to a few effects rather than many actions and consequently learns tasks more rapidly.
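To make the abstract's core ideas concrete, here is a minimal Python sketch of (a) a counterfactual estimate of a controlled effect, obtained by comparing the factual next state with a no-op rollout under the same exogenous noise so that uncontrolled changes cancel, and (b) a high-level policy that sets one of those effects as a goal for a low-level policy. This is a toy illustration under assumed names (`transition`, `controlled_effect`, `effect_bank`, `high_level_policy`, `low_level_reward` are all hypothetical), not the paper's actual learned estimator or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(state, action, noise):
    """Toy dynamics: dimension 0 responds to the action (controllable);
    dimension 1 follows exogenous drift shared by both rollouts."""
    next_state = state.copy()
    next_state[0] += action
    next_state[1] += noise
    return next_state

def controlled_effect(state, action, noise, noop=0.0):
    """Counterfactual controlled effect: the factual next state minus the
    next state under a default (no-op) action with identical exogenous
    noise. Changes the environment would have produced anyway cancel out,
    leaving only the transformation caused by the agent. (Sketch of the
    idea only; CEHRL learns such effects from data.)"""
    factual = transition(state, action, noise)
    counterfactual = transition(state, noop, noise)
    return factual - counterfactual

# Hypothetical bank of previously discovered controlled effects.
effect_bank = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]

def high_level_policy():
    """Set a temporally abstract goal: a controlled effect to produce,
    rather than a primitive action to take."""
    return effect_bank[rng.integers(len(effect_bank))]

def low_level_reward(achieved_effect, goal_effect):
    """Reward the low-level policy for realizing the goal effect."""
    return -float(np.linalg.norm(achieved_effect - goal_effect))

# Tiny rollout: the high level picks an effect; a stand-in low-level
# policy acts toward it, and credit flows through the effect, not actions.
state = np.zeros(2)
goal = high_level_policy()
for t in range(3):
    action = float(goal[0])  # greedy stand-in for a learned low-level policy
    noise = float(rng.normal(0.0, 0.1))
    effect = controlled_effect(state, action, noise)
    state = transition(state, action, noise)
    print(f"step {t}: effect={effect}, reward={low_level_reward(effect, goal):.3f}")
```

Note how credit assignment here touches a single goal effect per high-level decision rather than every primitive action along the trajectory, which is the efficiency argument the abstract makes.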