STL-Drive: Formal Verification Guided End-to-end Automated Driving

27 Sept 2024 (modified: 12 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Formal Verification, Automated Driving, Imitation Learning, Robustness, Safety
TL;DR: This framework enhances the safety of end-to-end driving models by applying Signal Temporal Logic (STL) and Responsibility-Sensitive Safety (RSS) through a custom loss function, improving both performance and safety in simulation and on real-world data.
Abstract: End-to-end automated driving models require extensive training data, gathered from expert (machine or human) drivers or through interaction with the environment, to learn a driving policy. Not all expert demonstrations represent safe driving, and neither do all behaviors discovered during trial-and-error exploration; yet the model should be able to learn from such data without its policy being degraded by the unsafe examples. We present STL-Drive, a learning framework that incorporates formal verification methods to improve the robustness and safety of learned models when the training data contain unsafe behaviors, with a particular focus on end-to-end automated driving. We use Signal Temporal Logic (STL) as the formal method for imposing safety constraints and the Responsibility-Sensitive Safety (RSS) framework to define those constraints. We design a loss function that combines the task objective with the STL robustness score to balance the learned policy's performance and safety, and we show that encoding safety constraints in STL and using the robustness score during training improves both. We validate our framework with the open-loop predictive simulator NAVSIM and real-world data from OpenScene. The results suggest a promising research direction in which formal methods can enhance the safety and resilience of deep learning models; formal verification of safety constraints for automated driving can further increase public trust in automated vehicles.
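The abstract does not give the exact form of the combined loss, so the following is a minimal, illustrative sketch (in PyTorch) of how such an objective could look: an L1 imitation term plus a penalty on the smoothed STL robustness of an "always keep at least the RSS safe following distance" specification. The RSS distance formula follows Shalev-Shwartz et al. (2017); all function names, default parameter values, the weighting `lam`, and the soft-min smoothing are our assumptions, not the authors' implementation.

```python
# Illustrative sketch only: combines a task (imitation) loss with an STL
# robustness penalty over an RSS safe-distance predicate. Names, weights,
# and the soft-min smoothing are assumptions, not the paper's code.

import torch


def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_accel=3.0, a_brake_min=4.0,
                                   a_brake_max=8.0):
    """RSS minimum safe following distance (Shalev-Shwartz et al., 2017).

    v_rear, v_front: tensors of rear (ego) and front vehicle speeds [m/s].
    rho: response time [s]; a_*: accel/brake bounds [m/s^2] (assumed values).
    """
    v_resp = v_rear + rho * a_accel  # ego speed after the response time
    d = (v_rear * rho
         + 0.5 * a_accel * rho ** 2
         + v_resp ** 2 / (2.0 * a_brake_min)
         - v_front ** 2 / (2.0 * a_brake_max))
    return torch.clamp(d, min=0.0)


def stl_always_robustness(margins, temp=10.0):
    """Smooth robustness of an 'always' (G) specification over a horizon.

    Exact robustness of G(phi) is the min of per-step margins; a soft-min
    (negated logsumexp) keeps the score differentiable for training.
    margins: (batch, horizon) tensor of signed safety margins.
    """
    return -torch.logsumexp(-temp * margins, dim=-1) / temp


def stl_drive_loss(pred_traj, expert_traj, gaps, v_rear, v_front, lam=0.1):
    """Task objective + STL robustness penalty (illustrative weighting).

    gaps: (batch, horizon) predicted distances to the lead vehicle.
    """
    task_loss = torch.nn.functional.l1_loss(pred_traj, expert_traj)
    margins = gaps - rss_safe_longitudinal_distance(v_rear, v_front)
    rho_stl = stl_always_robustness(margins)      # > 0 iff spec satisfied
    safety_loss = torch.relu(-rho_stl).mean()     # penalize violations only
    return task_loss + lam * safety_loss
```

One plausible design choice, reflected above, is to penalize only negative robustness (via the ReLU): trajectories that already satisfy the specification contribute nothing to the safety term, so the imitation objective is not distorted by demonstrations that are safe.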
Primary Area: applications to robotics, autonomy, planning
Submission Number: 11281