Keywords: Reinforcement Learning, Safety, Distributional Critic
TL;DR: We propose a safe reinforcement learning method based on the trust region method and distributional critics.
Abstract: To apply reinforcement learning (RL) to real-world applications, agents must adhere to the safety guidelines of their respective domains.
Safe RL can address these guidelines by maximizing returns while keeping the safety constraints satisfied.
In this paper, we develop a safe distributional RL method based on the trust region method, which can consistently satisfy safety constraints.
However, the importance sampling required by the trust region method can hinder performance due to its high variance, and policies may fail to meet the safety guidelines because of the estimation bias of distributional critics.
Hence, we enhance safety performance through the following approaches.
First, we propose novel surrogates for the trust region method, expressed with Q-functions using the reparameterization trick (first sketch below the abstract).
Second, we utilize distributional critics trained with a target distribution in which bias and variance can be traded off (second sketch below).
In addition, if an initial policy violates the safety constraints, there may be no constraint-satisfying policy within the trust region.
Thus, we propose a gradient integration method that is guaranteed to find a policy satisfying multiple constraints starting from an unsafe initial policy (third sketch below).
In extensive experiments, the proposed method shows minimal constraint violations while achieving high returns compared to existing safe RL methods.
Furthermore, we demonstrate the benefit of safe RL for problems in which the reward function cannot be easily specified.
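The following sketches illustrate the three approaches above; they are minimal illustrations under stated assumptions, not the paper's implementation. First, the reparameterization-trick surrogate: instead of importance-weighting off-policy actions, actions are resampled from the current policy so that gradients flow through the Q-function and no high-variance importance weights are needed. The Gaussian policy head and the `policy`/`q_critic` modules are assumed names.

```python
import torch

def surrogate_objective(policy, q_critic, states):
    """Estimate E_s[Q(s, a)] with a ~ pi_theta(.|s) via the reparameterization trick.

    The action is written as a = tanh(mu(s) + sigma(s) * eps) with eps ~ N(0, I),
    so gradients flow through the sampled action and no importance weights
    (and none of their variance) are required.
    """
    mu, log_std = policy(states)                     # assumed Gaussian policy head
    eps = torch.randn_like(mu)                       # noise independent of theta
    actions = torch.tanh(mu + log_std.exp() * eps)   # differentiable, squashed action
    return q_critic(states, actions).mean()          # surrogate to maximize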
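Second, one common way to obtain a target distribution whose bias and variance can be traded off is to blend bootstrapped critic quantiles with longer-horizon returns using a lambda weight, in the spirit of TD(lambda). The elementwise quantile mixing below is an illustrative simplification; the paper's actual target construction may differ.

```python
import torch

def lambda_quantile_target(rewards, next_quantiles, gamma, lam):
    """Blend n-step distributional targets with geometric lambda weights.

    rewards:        (T,) rewards along a sampled trajectory
    next_quantiles: (T, N) critic quantiles at each successor state s_{t+1}
    Small lam leans on the (biased) one-step bootstrap; lam close to 1 uses
    longer, higher-variance multi-step returns.
    """
    target = next_quantiles[-1]  # bootstrap at the end of the trajectory
    # Backward recursion: Z_t = r_t + gamma * ((1 - lam) * Zhat(s_{t+1}) + lam * Z_{t+1})
    for t in reversed(range(rewards.shape[0])):
        target = rewards[t] + gamma * ((1 - lam) * next_quantiles[t] + lam * target)
    return target  # (N,) quantile targets for the first state
```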
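Third, a rough illustration of recovering feasibility from an unsafe initial policy: the minimum-norm parameter step that, to first order, removes all listed constraint violations at once. The closed-form projection and the norm clipping are assumptions for illustration, not necessarily the paper's gradient integration rule.

```python
import numpy as np

def feasibility_step(constraint_grads, violations, max_norm=0.1):
    """Smallest parameter step that linearly cancels every violation.

    constraint_grads: (k, d) gradients g_i of the k violated constraints
    violations:       (k,)   positive violation amounts c_i
    Solves g_i . step = -c_i for all i with minimum ||step||:
        step = -G^T (G G^T)^{-1} c
    """
    G = np.asarray(constraint_grads, dtype=float)
    c = np.asarray(violations, dtype=float)
    step = -G.T @ np.linalg.solve(G @ G.T, c)
    norm = np.linalg.norm(step)
    if norm > max_norm:          # stay inside a trust-region-like radius
        step *= max_norm / norm
    return step
```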
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)
Supplementary Material: zip