Learning Visuo-Haptic Skewering Strategies for Robot-Assisted Feeding

Published: 10 Sept 2022, Last Modified: 05 May 2023
CoRL 2022 Oral
Readers: Everyone
Keywords: Assistive Feeding, Deformable Manipulation, Multisensory Learning
TL;DR: We develop a zero-shot bite acquisition framework for robot-assisted feeding by learning reactive, multimodal skewering strategies.
Abstract: Acquiring food items with a fork poses an immense challenge to a robot-assisted feeding system due to the wide range of material properties and visual appearances present across food groups. Deformable foods necessitate different skewering strategies than firm ones, but inferring such characteristics for several previously unseen items on a plate remains nontrivial. Our key insight is to leverage visual and haptic observations during interaction with an item to rapidly and reactively plan skewering motions. We learn a generalizable, multimodal representation for a food item from raw sensory inputs which informs the optimal skewering strategy. Given this representation, we propose a zero-shot framework to sense visuo-haptic properties of a previously unseen item and reactively skewer it, all within a single interaction. Real-robot experiments with foods of varying levels of visual and textural diversity demonstrate that our multimodal policy outperforms baselines which do not exploit both visual and haptic cues or do not reactively plan. Across 6 plates of different food items, our proposed framework achieves a 71% success rate over 69 total skewering attempts. Supplementary material, code, and videos can be found on our website: https://sites.google.com/view/hapticvisualnet-corl22/home.
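
To illustrate the kind of visuo-haptic strategy selection the abstract describes, below is a minimal sketch (not the authors' released implementation; see the linked GitHub repository for that) of a network that fuses an RGB crop of a food item with a haptic force/torque reading to score candidate skewering strategies. The module layout, dimensions, and the specific strategy set are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical strategy set; the paper's actual skewering strategies may differ.
STRATEGIES = ["vertical_skewer", "angled_skewer"]

class VisuoHapticPolicy(nn.Module):
    """Sketch: fuse an RGB crop and a haptic signal into strategy logits."""

    def __init__(self, haptic_dim: int = 6, num_strategies: int = len(STRATEGIES)):
        super().__init__()
        # Small convolutional encoder for a 64x64 RGB crop of the food item.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 32)
        )
        # MLP encoder for a short haptic reading (e.g. wrist force/torque).
        self.haptic_encoder = nn.Sequential(
            nn.Linear(haptic_dim, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fused head scores each candidate skewering strategy.
        self.head = nn.Linear(32 + 32, num_strategies)

    def forward(self, rgb: torch.Tensor, haptic: torch.Tensor) -> torch.Tensor:
        v = self.visual_encoder(rgb)       # (B, 32) visual features
        h = self.haptic_encoder(haptic)    # (B, 32) haptic features
        return self.head(torch.cat([v, h], dim=-1))  # (B, num_strategies)


if __name__ == "__main__":
    policy = VisuoHapticPolicy()
    rgb = torch.randn(1, 3, 64, 64)    # dummy image crop
    haptic = torch.randn(1, 6)         # dummy force/torque reading
    chosen = STRATEGIES[policy(rgb, haptic).argmax(dim=-1).item()]
    print("chosen strategy:", chosen)
```

In a reactive pipeline such as the one described, a network like this would be queried online after an initial probing interaction, so the haptic reading gathered during contact can override a purely visual guess before the skewer is committed.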
Student First Author: yes
Supplementary Material: zip
Website: https://sites.google.com/view/hapticvisualnet-corl22/home
Code: https://github.com/priyasundaresan/hapticvisualnet
