Compositional Zero-Shot Learning for Attribute-Based Object Reference in Human-Robot Interaction

Published: 03 Nov 2023, Last Modified: 09 Jan 2024, CRL_WS Poster
Keywords: Object Reference, Zero-Shot Learning, Human-Robot Interaction
Abstract: Language-enabled robots have been widely studied in recent years to enable natural human-robot interaction and teaming in various real-world applications. Such robots must be able to comprehend referring expressions, identifying a particular object from visual perception using a set of referring attributes extracted from natural language. However, visual observations of an object may not be available when it is referred to, and the number of objects and attributes may also be unbounded in open worlds. To address these challenges, we implement an attribute-based compositional zero-shot learning method that uses a list of attributes to perform referring expression comprehension in open worlds. We evaluate the approach on two datasets: MIT-States and Clothing 16K. Preliminary experimental results show that the implemented approach allows a robot to correctly identify the objects referred to by human commands.
Submission Number: 7
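
The abstract does not give implementation details, so the following is only a minimal sketch of the general idea behind attribute-based compositional zero-shot scoring for referring expression comprehension, not the authors' method. All names (`compose`, `score_regions`), the attribute and object vocabularies, and the random placeholder embeddings and region features are illustrative assumptions; a real system would use learned visual features, learned word embeddings, and a trained composition module.

```python
# Minimal sketch (not the authors' implementation) of attribute-based
# compositional zero-shot scoring for referring expression comprehension.
# All embeddings below are random placeholders for learned representations.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# Hypothetical vocabularies of referring attributes and object nouns.
attributes = ["red", "blue", "striped"]
objects = ["shirt", "mug", "box"]

# Placeholder embedding tables (learned in a real CZSL model).
attr_emb = {a: rng.normal(size=DIM) for a in attributes}
obj_emb = {o: rng.normal(size=DIM) for o in objects}

def compose(attr: str, obj: str) -> np.ndarray:
    """Compose an attribute-object pair into a single query embedding.
    A simple additive composition stands in for a learned composer."""
    v = attr_emb[attr] + obj_emb[obj]
    return v / np.linalg.norm(v)

def score_regions(attr: str, obj: str, region_feats: np.ndarray) -> np.ndarray:
    """Cosine similarity between the composed query and each detected region."""
    q = compose(attr, obj)
    feats = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    return feats @ q

# Fake visual features for three detected regions in the robot's view.
regions = rng.normal(size=(3, DIM))

# Resolve a command such as "pick up the red shirt": the highest-scoring
# region is taken as the referent, even if the pair "red shirt" was never
# observed during training (the zero-shot case).
scores = score_regions("red", "shirt", regions)
print("referred region index:", int(np.argmax(scores)))
```

The key property this sketch is meant to convey is that attribute and object representations are composed at query time, so a referring expression can be grounded even for attribute-object combinations unseen during training.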