Formal Methods in Robot Policy Learning and Verification: A Survey on Current Techniques and Future Directions
Abstract: As hardware and software systems have grown in complexity, formal methods have become indispensable tools for rigorously specifying acceptable behaviors, synthesizing programs to meet these specifications, and validating the correctness of existing programs. In the field of robotics, a similar trend of rising complexity has emerged, driven in large part by the adoption of deep learning. While this shift has enabled the development of highly performant robot policies, their implementation as deep neural networks has posed challenges to traditional formal analysis, leading to models that are inflexible, fragile, and difficult to interpret. In response, the robotics community has introduced new formal and semi-formal methods to support the precise specification of complex objectives, guide the learning process to achieve them, and enable the verification of learned policies against them. In this survey, we provide a comprehensive overview of how formal methods have been used in recent robot learning research. We organize our discussion around two pillars: policy learning and policy verification. For both, we highlight representative techniques, compare their scalability and expressiveness, and summarize how they contribute to meaningfully improving realistic robot safety and correctness. We conclude with a discussion of remaining obstacles to achieving that goal and promising directions for advancing formal methods in robot learning.
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Added references for specification-based runtime monitoring methods in section 4.4. Modified language in section 5.3, paragraph 3. Changed "richer" and "misalignment" in the introduction to be more clear. Added a definition for "dynamically feasible".
Assigned Action Editor: ~Oleg_Arenz1
Submission Number: 5454