Abstract: Safety is one of the main challenges in applying reinforcement learning to tasks in realistic environments. To ensure safety during and after the training process, existing methods tend to adopt overly conservative policies to avoid unsafe situations. However, an overly conservative policy severely hinders exploration. In this letter, we propose a method to construct a boundary that discriminates between safe and unsafe states. The boundary we construct is equivalent to identifying dead-end states; it indicates the maximum extent to which safe exploration can be guaranteed and thus imposes minimal restriction on exploration. Similar to Recovery Reinforcement Learning, we adopt a decoupled RL framework that learns two policies: (1) a task policy that considers only task performance, and (2) a recovery policy that maximizes safety. The recovery policy and a corresponding safety critic are pre-trained on an offline dataset, in which the safety critic evaluates an upper bound on safety for each state, serving as the agent's awareness of environmental safety. During online training, a behavior correction mechanism ensures that the agent interacts with the environment using only safe actions. Finally, experiments on continuous control tasks demonstrate that our approach achieves better task performance with fewer safety violations than state-of-the-art algorithms.
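To make the decoupled execution scheme described above concrete, the following is a minimal sketch of a behavior-correction step: a task policy proposes an action, a safety critic scores it, and the recovery policy takes over whenever the score exceeds a risk threshold. All names here (BehaviorCorrection, eps_risk, the stand-in callables) are illustrative assumptions and not the paper's actual implementation.

```python
import numpy as np


class BehaviorCorrection:
    """Hypothetical sketch: route execution through the recovery policy when the task action looks risky."""

    def __init__(self, task_policy, recovery_policy, safety_critic, eps_risk=0.3):
        self.task_policy = task_policy          # maximizes task return only
        self.recovery_policy = recovery_policy  # maximizes safety (pre-trained offline in the paper)
        self.safety_critic = safety_critic      # estimates a risk score for (state, action); assumed interface
        self.eps_risk = eps_risk                # assumed threshold separating safe from unsafe actions

    def act(self, state):
        a_task = self.task_policy(state)
        # If the proposed task action is judged unsafe (i.e., it could lead toward
        # a dead-end state), override it with the recovery action; otherwise keep it.
        if self.safety_critic(state, a_task) > self.eps_risk:
            return self.recovery_policy(state)
        return a_task


# Toy usage with stand-in callables, for illustration only.
rng = np.random.default_rng(0)
agent = BehaviorCorrection(
    task_policy=lambda s: rng.uniform(-1.0, 1.0, size=2),
    recovery_policy=lambda s: np.zeros(2),
    safety_critic=lambda s, a: float(np.linalg.norm(a)) / 2.0,
)
print(agent.act(np.zeros(4)))
```

The design choice this illustrates is the decoupling itself: the task policy never needs to reason about safety, because the safety critic and recovery policy act as a filter between the proposed action and the environment.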