Safety Verification of Decision-Tree Policies in Continuous Time

Published: 21 Sept 2023, Last Modified: 08 Jan 2024 · NeurIPS 2023 spotlight
Keywords: safety verification, decision tree, reinforcement learning, controller, continuous time
TL;DR: We present the first approach to safety verification for continuous-time decision-tree policies.
Abstract: Decision trees have gained popularity as interpretable surrogate models for learning-based control policies. However, providing safety guarantees for systems controlled by decision trees is an open challenge. We show that the problem is undecidable even for systems with the simplest dynamics, and PSPACE-complete for finite-horizon properties. The latter can be verified for discrete-time systems via bounded model checking. However, for continuous-time systems, such an approach requires discretization, thereby weakening the guarantees for the original system. This paper presents the first algorithm to directly verify decision-tree-controlled systems in continuous time. The key aspect of our method is exploiting the decision-tree structure to propagate a set-based approximation through the decision nodes. We demonstrate the effectiveness of our approach by verifying the safety of several decision trees distilled to imitate neural-network policies for nonlinear systems.
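To illustrate the core idea of propagating a set-based approximation through decision nodes, here is a minimal sketch (not the paper's actual algorithm): an axis-aligned box of states is split at each threshold predicate of the tree, yielding the sub-boxes on which each control action applies. All names and the box representation are illustrative assumptions.

```python
# Hedged sketch: splitting an axis-aligned box along the threshold
# predicates of a decision tree, so that each resulting sub-box maps
# to a single control action. Node/Box layouts are assumptions.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    feature: Optional[int] = None       # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None       # branch where x[feature] <= threshold
    right: Optional["Node"] = None      # branch where x[feature] >  threshold
    action: Optional[int] = None        # control action stored at a leaf

Box = List[Tuple[float, float]]         # per-dimension (lo, hi) intervals

def propagate(node: Node, box: Box) -> List[Tuple[Box, int]]:
    """Return (sub-box, action) pairs covering `box`.

    The box is clipped at each decision node's threshold; sub-boxes
    sharing the threshold boundary is a sound over-approximation.
    """
    if node.feature is None:
        return [(box, node.action)]
    lo, hi = box[node.feature]
    parts: List[Tuple[Box, int]] = []
    if lo <= node.threshold:            # part of the box takes the left branch
        left_box = list(box)
        left_box[node.feature] = (lo, min(hi, node.threshold))
        parts += propagate(node.left, left_box)
    if hi > node.threshold:             # part takes the right branch
        right_box = list(box)
        right_box[node.feature] = (max(lo, node.threshold), hi)
        parts += propagate(node.right, right_box)
    return parts

# Usage: a depth-1 tree splitting dimension 0 at 0.5 partitions the
# unit interval into two action regions.
tree = Node(feature=0, threshold=0.5,
            left=Node(action=0), right=Node(action=1))
regions = propagate(tree, [(0.0, 1.0)])
# → [([(0.0, 0.5)], 0), ([(0.5, 1.0)], 1)]
```

In a continuous-time reachability loop, each sub-box would then be advanced under the dynamics with its fixed action, which is what makes the tree structure useful: the approximation never has to represent a mixture of actions.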
Submission Number: 4791