Keywords: neural CBF, out-of-distribution analysis
Abstract: Learning-based synthesis of control barriers is an emerging approach to certifying safety for robotic systems.
Yet, its effectiveness hinges on self-annotation, i.e., how provisional safety labels are assigned to states for which no expert ground truth is available.
The prevailing pipeline annotates each unlabeled sample by forward-simulating it for a short horizon and trusting the network's predictions along that rollout,
a procedure that is unreliable when the model has not yet generalized.
This paper introduces an out-of-distribution (OOD)-aware self-annotation framework that conditions every provisional label on both the predicted barrier value and a calibrated OOD score measuring how near the query state lies to the network's training manifold.
We evaluate the proposed method in hardware experiments.
With a limited amount of real-world data, it achieves state-of-the-art performance in static and dynamic obstacle avoidance, producing statistically safer and less conservative maneuvers than existing methods.
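As a rough illustration of the labeling rule described in the abstract, the sketch below shows one possible way to gate a provisional safety label on an OOD score. Every name here (knn_ood_score, self_annotate, the thresholds, and the k-NN distance used as the OOD score) is a hypothetical stand-in, not the paper's actual architecture or calibration procedure.

```python
# Minimal sketch, assuming a scalar barrier prediction and a simple
# k-NN distance as the calibrated OOD score; the paper's method differs.
import numpy as np

def knn_ood_score(x, train_states, k=5):
    """Crude OOD score: mean Euclidean distance to the k nearest
    training states (smaller means closer to the training manifold)."""
    dists = np.linalg.norm(train_states - x, axis=1)
    return float(np.sort(dists)[:k].mean())

def self_annotate(x, barrier_value, train_states,
                  safe_margin=0.0, ood_threshold=1.0):
    """Return a provisional label ('safe', 'unsafe', or None).
    The label is withheld when the query state is too far
    out-of-distribution to trust the network's prediction."""
    if knn_ood_score(x, train_states) > ood_threshold:
        return None  # out-of-distribution: do not self-annotate
    return "safe" if barrier_value >= safe_margin else "unsafe"
```

In this toy version, withholding the label (returning None) plays the role of refusing to trust network predictions on states far from the training data; the actual framework conditions labels on a calibrated OOD score rather than a raw distance.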
Supplementary Material: zip
Submission Number: 37