Safe Reinforcement Learning for NN-controlled Systems with Neural Barrier Certificate Guidance

Hanrui Zhao, Mengxin Ren, Banglong Liu, Niuniu Qi, Xia Zeng, Zhenbing Zeng, Zhengfeng Yang

Published: 01 Jan 2025, Last Modified: 29 Jan 2026 · IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems · CC BY-SA 4.0
Abstract: Safe controller synthesis is crucial for safety-critical applications. This paper presents a novel reinforcement learning approach to synthesizing safe controllers for NN-controlled systems. The core idea is an iterative scheme that combines controller learning with neural barrier certificate (BC) verification, ultimately producing a provably safe deep neural network (DNN) controller with formal safety guarantees. The process begins by pre-training a well-performing DNN controller as an “oracle” via deep reinforcement learning (DRL). To formally verify the safety properties of the closed-loop system under this base controller, we devise a verification procedure that approximates the DNN controller by polynomial inclusion and then synthesizes neural BCs via sum-of-squares (SOS) relaxation. When the base controller is insufficient to yield a real BC, the current spurious BC is incorporated as an additional penalty term that reshapes the RL reward function, guiding the iterative refinement of new controllers. We implement an automated tool, NBCRL, and experimental results demonstrate the efficiency and scalability of our method, even on nonlinear systems with dimension up to 12.
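The reward-reshaping loop described in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical toy, not the paper's actual implementation: a 1-D linear system stands in for the NN-controlled plant, a fixed quadratic stands in for the learned neural barrier certificate, and a sampled Lie-derivative check stands in for SOS verification. Only the overall pattern, penalizing barrier-condition violations inside the RL reward, follows the text.

```python
# Toy setting (hypothetical, for illustration only):
#   system      x' = x + u,  safe region |x| <= 1
#   "controller" u = -gain * x   (placeholder for the DNN controller)
#   candidate barrier B(x) = x^2 - 1 (placeholder for the neural BC)

def controller(x, gain):
    # Placeholder for the DNN controller's output.
    return -gain * x

def barrier_violation(x, gain):
    # Lie derivative of B(x) = x^2 - 1 along the closed loop:
    #   dB/dx * (x + u) = 2x * (x - gain*x) = 2*(1 - gain)*x^2
    # A positive value means the candidate BC is spurious at x.
    lie = 2.0 * x * (x + controller(x, gain))
    return max(0.0, lie)

def reshaped_reward(x, gain, base_reward, penalty_weight=10.0):
    # The paper's idea in miniature: subtract a penalty proportional
    # to the spurious-BC violation from the original RL reward.
    return base_reward - penalty_weight * barrier_violation(x, gain)

# Sample-based check (a stand-in for formal SOS verification):
# with gain = 2, the closed loop is x' = -x and the Lie derivative
# 2*(1 - gain)*x^2 is nonpositive everywhere, so no penalty accrues;
# with gain = 0.5 the condition fails and the reward is penalized.
samples = [i / 10.0 for i in range(-10, 11)]
total_violation_good = sum(barrier_violation(x, gain=2.0) for x in samples)
total_violation_bad = sum(barrier_violation(x, gain=0.5) for x in samples)
```

In the actual method the verifier either returns a real BC (formal safety proof obtained) or a spurious one, and only in the latter case does the penalty term enter the next round of DRL training.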