Keywords: Planning with Gestures, Human-Robot Interaction, LLM Reasoning
TL;DR: We propose a framework, GIRAF, for more flexibly interpreting human gesture and language instructions by leveraging the power of large language models.
Abstract: Gestures are a fundamental mode of non-verbal communication among humans. Deictic gestures (such as pointing at an object), in particular, offer a valuable means of efficiently expressing intent in situations where language is inaccessible, restricted, or highly specialized. It is therefore essential for robots to comprehend gestures in order to infer human intentions and coordinate with people more effectively. Prior work often relies on a rigid, hand-coded library of gestures and their meanings. However, the interpretation of gestures is often context-dependent, requiring greater flexibility and common-sense reasoning. In this work, we propose GIRAF, a framework for more flexibly interpreting gesture and language instructions by leveraging the power of large language models. Our framework accurately infers human intent and contextualizes the meaning of gestures for more effective human-robot collaboration. We instantiate the framework on three table-top manipulation tasks and demonstrate that it is both effective and preferred by users. We further demonstrate GIRAF's ability to reason about diverse types of gestures by curating GestureInstruct, a dataset of 36 different task scenarios, on which GIRAF achieves an 81% success rate at finding the correct plan.
Videos and datasets can be found on our project website: https://tinyurl.com/giraf23
Student First Author: yes
Instructions: I have read the instructions for authors (https://corl2023.org/instructions-for-authors/)
Video: https://drive.google.com/file/d/1BvaktMcA4m0-Kne2lmm6b0gH7a4PyrGg/view
Website: https://drive.google.com/file/d/1BvaktMcA4m0-Kne2lmm6b0gH7a4PyrGg/view
Publication Agreement: pdf
Poster Spotlight Video: mp4