Reducing Human-Robot Goal State Divergence with Environment Design

Published: 30 Apr 2024, Last Modified: 30 Apr 2024, HAXP 2024, CC BY 4.0
Keywords: Human-Robot Collaboration, Planning, Scheduling and Coordination, Human-Centric Robotics
TL;DR: The paper presents a new metric called Goal State Divergence (GSD), which measures how far a robot's final goal state diverges from the one a human user expects, and uses it within an environment design problem to align the two states as closely as possible and reduce unwanted side effects; the approach is evaluated empirically.
Abstract: One of the most difficult challenges in creating successful human-AI collaborations is aligning a robot’s behavior with a human user’s expectations. When this alignment fails, a robot may misinterpret the user’s specified goals, prompting it to perform actions with unanticipated, potentially dangerous side effects. To avoid this, we propose a new metric we call Goal State Divergence (GSD), which represents the difference between a robot’s final goal state and the one a human user expected. In cases where GSD cannot be directly calculated, we show how it can be approximated using maximal and minimal bounds. We then use the GSD value as input to our novel human-robot goal alignment (HRGA) design problem, which identifies a minimal set of environment modifications that can prevent such mismatches. To show the effectiveness of GSD for reducing differences between human-robot goal states, we empirically evaluate our approach on several standard benchmarks.
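The abstract describes GSD informally as the difference between the robot's final goal state and the state the human expected. A minimal sketch of that idea, assuming goal states are modeled as sets of facts (fluents) and taking the divergence to be the size of their symmetric difference (the paper's exact formulation and its max/min bounds may differ):

```python
def goal_state_divergence(robot_state: set[str], expected_state: set[str]) -> int:
    """Count the facts that differ between the robot's final goal state
    and the goal state the human user expected (hypothetical formulation:
    cardinality of the symmetric difference)."""
    return len(robot_state ^ expected_state)


# Hypothetical example: the robot reached the kitchen as expected,
# but broke a vase along the way -- an unanticipated side effect.
robot_final = {"at(robot, kitchen)", "door(open)", "vase(broken)"}
human_expected = {"at(robot, kitchen)", "door(open)"}

print(goal_state_divergence(robot_final, human_expected))  # 1 differing fact
```

A GSD of 0 would mean the robot's final state matches the user's expectation exactly; the HRGA design problem then seeks a minimal set of environment modifications that drives this value down.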
Submission Number: 5