Semantically Safe Robot Manipulation: From Semantic Scene Understanding to Motion Safeguards

Published: 22 Oct 2024 · Last Modified: 07 Nov 2024 · CoRL 2024 Workshop SAFE-ROL Poster · CC BY 4.0
Keywords: Robot Safety, Safe Control, Robot Manipulation, Semantic Constraint Satisfaction
TL;DR: We extend robot safety beyond obstacle avoidance by accounting for the semantics of the environment (e.g., moving a cup of water above a laptop is unsafe because it can lead to undesirable spills).
Abstract: A robot's ability to understand and adhere to constraints that humans recognize as "common sense" (e.g., "moving a cup of water above a laptop is unsafe because the water may spill" or "rotating a cup of water while moving is unsafe because its contents may pour out") is crucial for ensuring safe interaction in human-centric environments. Recent advances in computer vision and machine learning have enabled robots to acquire a semantic understanding of their operating environments and to reason about them. While there is extensive literature on safe robot decision-making, semantic understanding is rarely integrated into these formulations. In this work, we propose a semantic safety filter framework that certifies robot inputs with respect to both semantically defined constraints and geometrically defined safety constraints (e.g., environment-collision and self-collision constraints). In our approach, given perception inputs, we build a semantic map of the 3D environment and leverage the contextual reasoning capabilities of large language models to infer semantically unsafe conditions. These conditions are then mapped to safe actions through a control barrier certification formulation. We evaluate our semantic safety filter in pick-and-place and teleoperated tabletop manipulation tasks, demonstrating its effectiveness in incorporating semantic constraints to ensure safe robot operation beyond collision avoidance.
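As a rough illustration of the control barrier certification idea described in the abstract (not the paper's implementation), the sketch below filters a nominal end-effector velocity command against a single semantically derived keep-out constraint, e.g., "do not carry the cup into the region above the laptop." The single-integrator model, box geometry, gains, and function names are all assumptions made for this sketch.

```python
"""Minimal control-barrier-function (CBF) safety-filter sketch, assuming a
single-integrator end-effector model x_dot = u and one semantic keep-out box
(hypothetical region above a laptop inferred from a semantic map)."""

import numpy as np

# Hypothetical keep-out box: laptop footprint extended 0.5 m upward (assumed values).
BOX_MIN = np.array([0.3, -0.2, 0.0])
BOX_MAX = np.array([0.6, 0.2, 0.5])


def barrier(x: np.ndarray) -> tuple[float, np.ndarray]:
    """Barrier h(x): signed distance to the box (positive outside), plus its gradient."""
    d = np.maximum(BOX_MIN - x, 0.0) + np.maximum(x - BOX_MAX, 0.0)
    dist = float(np.linalg.norm(d))
    if dist < 1e-9:
        # Inside the box: point the gradient away from the box center.
        g = x - 0.5 * (BOX_MIN + BOX_MAX)
        return -1e-3, g / (np.linalg.norm(g) + 1e-9)
    sign = np.where(x > BOX_MAX, 1.0, np.where(x < BOX_MIN, -1.0, 0.0))
    return dist, sign * d / dist


def safety_filter(x: np.ndarray, u_nom: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Minimally modify u_nom so the CBF condition h_dot >= -alpha * h(x) holds.

    For a single affine constraint a^T u >= b, the QP
        min ||u - u_nom||^2  s.t.  a^T u >= b
    reduces to projecting u_nom onto the half-space, so no QP solver is needed here.
    """
    h, grad_h = barrier(x)
    a, b = grad_h, -alpha * h
    slack = a @ u_nom - b
    if slack >= 0.0:
        return u_nom                      # nominal input is already certified safe
    return u_nom - (slack / (a @ a)) * a  # project onto the constraint boundary


if __name__ == "__main__":
    x = np.array([0.25, 0.0, 0.3])     # cup approaching the laptop from the side
    u_nom = np.array([0.2, 0.0, 0.0])  # teleop command pushing toward the keep-out box
    print("certified input:", safety_filter(x, u_nom))
```

In this toy setting the filter leaves the command untouched while the CBF condition holds and otherwise scales back only the velocity component that drives the end-effector toward the keep-out region; the paper's formulation additionally handles geometric collision and self-collision constraints.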
Submission Number: 33