Keywords: Probabilistic Verification, Safe Reinforcement Learning, Robotic Navigation, Formal Methods, Markov Decision Processes, Model Checking, Safety-Critical AI
TL;DR: We present a method to provide formal, probabilistic safety guarantees for neural network controllers in crowded environments, bridging the gap between data-driven learning and verifiable assurance.
Abstract: Ensuring the safety of learned robotic controllers in crowded, uncertain environments remains a critical challenge. This paper introduces a novel framework for obtaining formal, probabilistic safety guarantees for black-box neural navigation policies. By constructing a Markov Decision Process abstraction that integrates a probabilistic pedestrian intention model with the neural policy's behavior, we enable rigorous verification using probabilistic model checking. Our method provides non-vacuous, quantitative upper bounds on collision probability, a significant improvement over the overly conservative guarantees of deterministic verification and the complete lack of formal assurance in pure learning-based approaches. Extensive experiments in benchmark simulators demonstrate that our guarantees are robust to model inaccuracies and offer a practical trade-off between computational cost and bound tightness, providing a crucial step toward certifiable embodied AI systems.
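The core computation behind such guarantees is probabilistic reachability on the abstraction: once the learned policy is composed with the pedestrian intention model, the resulting Markov chain can be checked for the probability of ever reaching a collision state. The sketch below is not the paper's implementation; it is a minimal, self-contained illustration of bounded-horizon reachability via value iteration, where the transition matrix, state labels, and horizon are placeholder assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): bounded-horizon
# probabilistic reachability on a toy Markov-chain abstraction.
import numpy as np

# States: 0 = far from pedestrians, 1 = near pedestrian, 2 = collision (absorbing).
# Each row gives transition probabilities under the fixed learned policy,
# with pedestrian-intention uncertainty folded into the probabilities.
P = np.array([
    [0.90, 0.09, 0.01],
    [0.30, 0.60, 0.10],
    [0.00, 0.00, 1.00],
])

def collision_probability(P, collision_states, horizon):
    """Probability of reaching a collision state within `horizon` steps, per start state.

    Standard recursion for bounded reachability:
    p_0(s) = 1 if s is a collision state, else 0;
    p_{k+1}(s) = 1 for collision states, otherwise sum_{s'} P[s, s'] * p_k(s').
    """
    p = np.zeros(P.shape[0])
    p[collision_states] = 1.0
    for _ in range(horizon):
        p = P @ p
        p[collision_states] = 1.0  # collision states are absorbing
    return p

if __name__ == "__main__":
    bounds = collision_probability(P, collision_states=[2], horizon=50)
    print("P(collision within 50 steps) from state 0:", bounds[0])
```

In practice, tools such as PRISM or Storm perform this kind of analysis at scale; the example only shows the shape of the quantity being bounded.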
Submission Number: 19