Leveraging Large Language Models to Repair High-level Robot Controllers from Assumption Violations

Published: 06 Mar 2025, Last Modified: 21 Mar 2025 · ICLR 2025 Workshop VerifAI Poster · License: CC BY 4.0
Keywords: Formal Methods; Robotics; Large Language Models
TL;DR: A hybrid approach that combines large language models (LLMs) and formal methods to speed up runtime repair of high-level robot controllers from assumption violations.
Abstract: This paper presents INPROVF, an automatic framework that combines large language models (LLMs) and formal methods to speed up the repair of high-level robot controllers. Previous approaches based solely on formal methods are computationally expensive and cannot scale to large state spaces. In contrast, INPROVF uses LLMs to generate repair candidates and formal methods to verify their correctness. To improve the quality of these candidates, the framework first translates the symbolic representations of the environment and controllers into natural-language descriptions. If a candidate fails verification, INPROVF provides feedback on potential unsafe behaviors or unsatisfied tasks and iteratively prompts the LLM to generate improved solutions. We demonstrate INPROVF on 12 assumption violations spanning various workspaces, tasks, and state-space sizes.
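The generate-verify-feedback loop described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the LLM call and the formal verifier are passed in as stand-in functions (`propose_fix`, `verify`), and all names and the iteration bound are hypothetical.

```python
def repair_controller(spec_description, violation, propose_fix, verify, max_iters=5):
    """Hypothetical sketch of an INPROVF-style repair loop.

    propose_fix(description, feedback) -> a candidate controller (any representation),
        standing in for an LLM prompted with a natural-language description.
    verify(candidate) -> (ok, feedback), standing in for a formal verifier that
        reports unsafe behaviors or unsatisfied tasks when the candidate fails.
    """
    feedback = f"Assumption violation observed: {violation}"
    for _ in range(max_iters):
        # Ask the (mock) LLM for a repair candidate, conditioned on feedback.
        candidate = propose_fix(spec_description, feedback)
        # Formally check the candidate; on failure, carry the diagnostic forward.
        ok, feedback = verify(candidate)
        if ok:
            return candidate
    return None  # no verified repair found within the iteration budget
```

In use, the verifier's feedback string is folded into the next prompt, which is the mechanism the abstract credits for improving candidate quality across iterations.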
Submission Number: 43
