Keywords: Safe Learning for Control; Control Systems; Neural Control Barrier Certificates
TL;DR: Novel formulation of formally correct neural control barrier certificates that scales to higher-dimensional systems and larger neural networks while training faster.
Abstract: The design of controllers with correctness guarantees is a primary concern for safety-critical control systems.
A Control Barrier Certificate (CBC) is a real-valued function over the state space of the system that provides an inductive proof of the existence of a safe controller.
Recently, neural networks have been successfully deployed for data-driven learning of control barrier certificates.
These approaches encode the conditions for the existence of a CBC using a rectified linear unit (ReLU) loss function.
The resulting encoding, while sound, tends to be conservative, which results in slower training and limits scalability to large, complex systems.
Can altering the loss function alleviate some of the problems associated with ReLU loss and lead to faster learning?
This paper proposes a novel encoding with a Mean Squared Error (MSE) loss function, which allows for more scalable and efficient training, while addressing some of the theoretical limitations of previous methods.
The proposed approach derives a validity condition based on Lipschitz continuity to formally characterize safety guarantees, eliminating the need for post-hoc verification.
The effectiveness of the proposed loss function is demonstrated through six case studies curated from the existing state of the art.
Our results provide a compelling argument for exploring alternative loss function choices as a novel approach to optimizing the design of control barrier certificates.
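To make the contrast concrete, the sketch below shows one common way the CBC conditions are turned into training losses: per-sample violations of the certificate conditions are aggregated either linearly (the ReLU/hinge-style encoding the abstract refers to) or quadratically (the proposed MSE-style encoding). The three conditions, the margin `eps`, and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def cbc_violations(B, x_init, x_unsafe, x_pairs, eps=0.01):
    """Per-sample violations of three typical CBC conditions
    (an assumed, simplified discrete-time variant):
      (1) B(x) <= 0 on the initial set,
      (2) B(x) >= eps on the unsafe set,
      (3) B non-increasing along sampled transitions (x, x')."""
    v_init = relu(B(x_init))            # positive iff condition (1) is violated
    v_unsafe = relu(eps - B(x_unsafe))  # positive iff condition (2) is violated
    x, x_next = x_pairs
    v_decr = relu(B(x_next) - B(x))     # positive iff condition (3) is violated
    return np.concatenate([v_init, v_unsafe, v_decr])

def relu_loss(viol):
    """ReLU (hinge-style) encoding: mean of raw violations."""
    return viol.mean()

def mse_loss(viol):
    """MSE-style encoding: mean of squared violations."""
    return (viol ** 2).mean()
```

A valid certificate drives every violation to zero, so both losses vanish on it; they differ in the gradient signal they provide during training, which is where the scalability and speed differences arise.

```python
B = lambda x: x  # toy 1-D candidate certificate
x_init = np.array([-1.0, -0.5])                # B <= 0 holds
x_unsafe = np.array([1.0, 2.0])                # B >= eps holds
x_pairs = (np.array([1.0]), np.array([0.5]))   # B decreases along the transition
viol = cbc_violations(B, x_init, x_unsafe, x_pairs)
print(relu_loss(viol), mse_loss(viol))  # both 0.0: all conditions satisfied
```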
Supplementary Material: zip
Primary Area: learning on time series and dynamical systems
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8460