Safe and Efficient Operation with Constrained Hierarchical Reinforcement Learning

Published: 20 Jul 2023, Last Modified: 30 Aug 2023, EWRL16
Keywords: Hierarchical Reinforcement Learning, Safety, Constrained Reinforcement Learning
TL;DR: We propose a constrained HRL framework that projects low-level actions onto a safe set to guarantee safe operation.
Abstract: Hierarchical Reinforcement Learning (HRL) holds the promise of improving the sample efficiency and generalization of Reinforcement Learning (RL) agents by leveraging task decomposition and temporal abstraction, which aligns with human reasoning. However, the adoption of HRL (and RL in general) for real-world problems has been limited by, among other factors, the lack of effective techniques for making agents adhere to safety requirements encoded as constraints, the common practice for specifying the functional safety of safety-critical systems. While some constrained Reinforcement Learning methods exist in the literature, we show that regular flat policies can suffer performance degradation when dealing with safety constraints. To overcome this limitation, we propose a constrained HRL topology that separates planning and control, with constraint optimization performed at the lower level of abstraction. Simulation experiments show that our approach maintains its performance while adhering to safety constraints, even in scenarios where the flat policy's performance deteriorates when it tries to prioritize safety.
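
To make the planning/control split concrete, the sketch below illustrates the general idea of projecting a low-level action onto a safe set before execution. This is a minimal illustration, not the authors' implementation: the function names (`high_level_policy`, `low_level_policy`, `project_to_safe_set`) and the box-shaped safe set are assumptions chosen for simplicity; the paper's safe set may require a more involved projection (e.g., solving a small constrained optimization problem).

```python
import numpy as np

def project_to_safe_set(action, a_min, a_max):
    """Project a proposed low-level action onto a box-shaped safe set.

    Hypothetical stand-in for the paper's projection step: here the safe
    set is simply an axis-aligned box [a_min, a_max], so projection
    reduces to element-wise clipping.
    """
    return np.clip(action, a_min, a_max)

def hierarchical_step(state, high_level_policy, low_level_policy, a_min, a_max):
    """One step of the hierarchical loop (all names illustrative).

    The high level handles planning (it emits a goal), the low level
    handles control (it proposes an action toward that goal), and the
    proposed action is projected onto the safe set before being executed,
    so constraint satisfaction is enforced at the lower abstraction level.
    """
    goal = high_level_policy(state)                  # planning
    raw_action = low_level_policy(state, goal)       # control
    safe_action = project_to_safe_set(raw_action, a_min, a_max)
    return safe_action
```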