MOPA: a Minimalist Off-Policy Approach to Safe-RL

08 Oct 2022, 17:46 (modified: 09 Dec 2022, 14:32), Deep RL Workshop 2022
Keywords: Recurrent Models for RL, Safe-RL, Early Terminated MDP
TL;DR: We propose a minimalist approach to Safe-RL.
Abstract: Safety is one of the crucial concerns for the real-world application of reinforcement learning (RL). Previous works treat safe exploration as a Constrained Markov Decision Process (CMDP), in which policies are optimized under constraints. However, when encountering potential danger, humans tend to stop immediately and rarely learn to behave safely within the danger. Moreover, the off-policy nature of human learning enables high efficiency in risky tasks. Motivated by human learning, we introduce a Minimalist Off-Policy Approach (MOPA) to the Safe-RL problem. We first define the Early Terminated MDP (ET-MDP), a special class of MDPs whose optimal value function equals that of its CMDP counterpart. We then propose MOPA, an off-policy learning algorithm based on recurrent models, to solve the ET-MDP and thereby the corresponding CMDP. Experiments on various Safe-RL tasks show a substantial improvement over previous methods that solve the CMDP directly, in terms of both higher asymptotic performance and better learning efficiency.
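The core ET-MDP construction described in the abstract — ending an episode as soon as the safety constraint is violated, rather than optimizing under the constraint — can be sketched as an environment wrapper. This is a minimal illustration of the idea only; the `reset`/`step` interface, the `ToyEnv`, and the cumulative-cost threshold are assumptions, not the paper's actual API or algorithm.

```python
class EarlyTerminatedEnv:
    """Sketch of an ET-MDP wrapper: terminate the episode early
    once accumulated constraint cost exceeds a limit (assumed interface)."""

    def __init__(self, env, cost_limit=0.0):
        self.env = env
        self.cost_limit = cost_limit
        self._episode_cost = 0.0

    def reset(self):
        self._episode_cost = 0.0
        return self.env.reset()

    def step(self, action):
        obs, reward, cost, done = self.env.step(action)
        self._episode_cost += cost
        # Early termination: a constraint violation ends the episode,
        # instead of being handled by constrained optimization.
        if self._episode_cost > self.cost_limit:
            done = True
        return obs, reward, cost, done


class ToyEnv:
    """Trivial hypothetical environment: action 1 is rewarding but costly."""

    def reset(self):
        self.t = 0
        return 0

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        cost = 1.0 if action == 1 else 0.0
        return self.t, reward, cost, self.t >= 10


env = EarlyTerminatedEnv(ToyEnv(), cost_limit=2.0)
env.reset()
done, steps = False, 0
while not done:
    _, _, _, done = env.step(1)  # always take the risky action
    steps += 1
# With a cost limit of 2.0, the unsafe policy is cut off after 3 risky steps.
```

An agent trained in this wrapped environment never needs explicit constraint terms in its objective: unsafe behavior simply truncates the return, which is why (per the abstract) the ET-MDP shares its optimal value function with the CMDP counterpart.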