Abstract: Safe controller synthesis is crucial for safety-critical applications. This paper presents a novel reinforcement learning approach to synthesizing safe controllers for neural-network-controlled systems. The core idea is an iterative scheme that combines controller learning with neural barrier certificate (BC) verification, ultimately producing a deep neural network (DNN) controller with formal safety guarantees. The process begins by pre-training a well-performing DNN controller as an “oracle” via deep reinforcement learning (DRL). To formally verify the safety of the closed-loop system under this base controller, we devise a verification procedure that approximates the DNN controller using polynomial inclusion and then synthesizes neural BCs via sum-of-squares (SOS) relaxation. When the base controller does not admit a valid BC, the current spurious BC is incorporated as an additional penalty term that reshapes the RL reward function, guiding the iterative refinement of new controllers. We implement the approach in an automated tool, NBCRL; experimental results demonstrate its efficiency and scalability, even on a nonlinear system of dimension up to 12.
External IDs: doi:10.1109/tcad.2025.3616856