Plug in the Safety Chip: Enforcing Constraints for LLM-driven Robot Agents

Published: 21 Oct 2023, Last Modified: 03 Nov 2023
Venue: LangRob @ CoRL 2023 Poster
Keywords: LLM agent, language grounding, formal method
TL;DR: A safety module for LLM-driven robot agents that allows encoding safety constraints in natural language, pruning unsafe actions, and assisting the agent in re-generating its plan.
Abstract: Recent advancements in large language models (LLMs) have enabled a new research domain, LLM agents, for solving robotics and planning tasks by leveraging the world knowledge and general reasoning abilities that LLMs acquire during pretraining. However, while considerable effort has gone into teaching the robot the "dos", the "don'ts" have received relatively little attention. We argue that, for any practical usage, it is just as crucial to teach the robot the "don'ts": conveying explicit instructions about prohibited actions, assessing the robot's comprehension of these restrictions, and, most importantly, ensuring compliance. Moreover, verifiable safe operation is essential for deployments that must satisfy international safety standards such as IEC 61508, which governs the safe deployment of robots in industrial factory environments worldwide. Aiming to deploy LLM agents in collaborative environments, we propose a queryable safety constraint module based on linear temporal logic (LTL) that simultaneously supports encoding natural language (NL) instructions into temporal constraints, reasoning about and explaining safety violations, and pruning unsafe actions. To demonstrate the effectiveness of our system, we conducted experiments in the VirtualHome environment and on a real robot. The results show that our system strictly adheres to the safety constraints and scales well as the constraints grow more complex, highlighting its potential for practical utility.
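
The abstract describes unsafe-action pruning against LTL safety constraints at a high level; the sketch below is a minimal, hypothetical illustration of that idea and is not the authors' implementation. It keeps "globally avoid" constraints of the form G(¬φ), predicts the effect of each candidate action with a toy world model, and drops actions whose predicted state satisfies the forbidden condition. All predicate names, action strings, and the simulate_effects helper are invented for illustration.

```python
# Minimal sketch (assumptions, not the paper's code): pruning candidate
# actions that would violate a "globally avoid" constraint G(not phi).

from typing import Callable, Dict, List

State = Dict[str, bool]   # truth values of grounded predicates
Action = str


def simulate_effects(state: State, action: Action) -> State:
    """Hypothetical one-step world model: predicted next state."""
    nxt = dict(state)
    if action == "pick_up(knife)":
        nxt["holding(knife)"] = True
    if action == "put_down(knife)":
        nxt["holding(knife)"] = False
    if action == "walk_to(human)":
        nxt["near(human)"] = True
    return nxt


class SafetyChip:
    """Stores constraints G(not phi) and prunes actions whose predicted
    successor state satisfies phi, reporting which constraint was violated."""

    def __init__(self) -> None:
        self.constraints: List[Callable[[State], bool]] = []
        self.descriptions: List[str] = []

    def add_constraint(self, phi: Callable[[State], bool], nl: str) -> None:
        # phi encodes the forbidden condition; nl keeps the natural-language
        # form so violations can be explained back to the planner.
        self.constraints.append(phi)
        self.descriptions.append(nl)

    def prune(self, state: State, candidates: List[Action]) -> List[Action]:
        safe = []
        for a in candidates:
            nxt = simulate_effects(state, a)
            violated = [d for phi, d in zip(self.constraints, self.descriptions)
                        if phi(nxt)]
            if violated:
                print(f"pruned {a!r}: violates {violated}")
            else:
                safe.append(a)
        return safe


if __name__ == "__main__":
    chip = SafetyChip()
    chip.add_constraint(
        lambda s: s.get("holding(knife)", False) and s.get("near(human)", False),
        "never carry a knife while close to a person",
    )
    state = {"holding(knife)": True, "near(human)": False}
    print(chip.prune(state, ["walk_to(human)", "put_down(knife)"]))
```

In this toy example the constraint rules out walking toward the human while holding a knife, so only the put-down action survives pruning; a full system would instead translate NL constraints into LTL formulas and evaluate them over the planned trace.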
Submission Number: 17