A Multimodal Teach-in Approach to the Pick-and-Place Problem in Human-Robot Collaboration

Published: 01 Jan 2023 · Last Modified: 05 Nov 2023 · HRI (Companion) 2023
Abstract: Teaching robotic systems how to carry out a task in a collaborative environment still presents a challenge, because replicating natural human-to-human interaction requires interaction modalities that can convey complex information. Speech, gestures, gaze-based interaction, as well as directly guiding a robotic system, are such modalities and have the potential to enable smooth multimodal human-robot interaction. This paper presents a conceptual approach for multimodally teaching a robotic system how to pick and place an object, one of the fundamental tasks not only in robotics but in everyday life. By establishing the task model and the dialogue model separately, we aim to decouple robot/task logic from interaction logic and to achieve modality independence for the teaching interaction. Finally, we describe an experimental implementation of our models for multimodally teaching a UR-10 robot arm how to pick and place an object.
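
The separation described in the abstract can be pictured with a minimal sketch. The following Python snippet is purely illustrative and not the authors' implementation: all class and method names (PickAndPlaceTask, TeachInDialogue, handle_event, etc.) are hypothetical. It only shows the idea that the task model holds robot/task state while the dialogue model consumes modality-independent events, so speech, gesture, gaze, or kinesthetic guidance can all feed the same teaching interaction.

```python
# Illustrative sketch only: names are hypothetical and not taken from the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Pose:
    """A Cartesian pose the robot should reach (position in metres)."""
    x: float
    y: float
    z: float


@dataclass
class PickAndPlaceTask:
    """Task model: holds only robot/task state, no interaction logic."""
    pick_pose: Optional[Pose] = None
    place_pose: Optional[Pose] = None

    def is_complete(self) -> bool:
        return self.pick_pose is not None and self.place_pose is not None


class TeachInDialogue:
    """Dialogue model: interprets modality-independent events and fills the task model.

    Events may originate from speech, gestures, gaze, or hand guidance; the
    dialogue only sees an abstract (intent, pose) pair, which is what makes
    the teaching interaction modality independent.
    """

    def __init__(self, task: PickAndPlaceTask):
        self.task = task

    def handle_event(self, intent: str, pose: Pose) -> str:
        if intent == "teach_pick":
            self.task.pick_pose = pose
            return "Pick pose stored. Please show me where to place the object."
        if intent == "teach_place":
            self.task.place_pose = pose
            return "Place pose stored. The task is ready to execute."
        return "Sorry, I did not understand that instruction."


# Usage: the same dialogue handles events regardless of the input modality.
task = PickAndPlaceTask()
dialogue = TeachInDialogue(task)
print(dialogue.handle_event("teach_pick", Pose(0.40, -0.20, 0.15)))   # e.g. from speech + pointing
print(dialogue.handle_event("teach_place", Pose(0.10, 0.35, 0.15)))   # e.g. from hand guidance
assert task.is_complete()
```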