Real-Time Grocery Packing by Integrating Vision, Tactile Sensing, and Soft Fingers

Published: 01 Jan 2024 · Last Modified: 24 Feb 2025 · RoboSoft 2024 · CC BY-SA 4.0
Abstract: Although bin packing has been a key benchmark task for robotic manipulation, the community has mainly focused on placing rigid rectilinear objects within the container. We address this gap by presenting a soft robotic hand that combines vision, motor-based proprioception, and soft tactile sensors to identify, sort, and pack a stream of unknown objects. This multimodal sensing approach enables our soft robotic manipulator to estimate an object's size and stiffness, allowing us to translate the ill-defined human conception of a "well-packed container" into attainable metrics. We demonstrate the effectiveness of this soft robotic system in a realistic grocery packing scenario, where objects of arbitrary shape, size, and stiffness move down a conveyor belt and must be placed intelligently to avoid crushing delicate objects. Combining tactile and proprioceptive feedback with external vision yielded a significant reduction in item-damaging packing maneuvers compared to a sensorless baseline (9× fewer) and a vision-only approach (4.5× fewer), demonstrating how integrating multiple sensing modalities within a soft robotic system can address complex manipulation applications.
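The abstract describes translating "well-packed" into attainable metrics using estimated size and stiffness. As a minimal illustrative sketch (not the paper's actual algorithm), one such metric could be a placement ordering that puts stiff, large items at the bottom of the bin and soft, delicate items on top; all names, scales, and thresholds below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ObjectEstimate:
    """Per-object estimates: size from vision, stiffness from tactile
    and proprioceptive sensing (illustrative fields, not the paper's API)."""
    name: str
    size_cm: float    # longest dimension estimated from vision
    stiffness: float  # 0.0 = very soft, 1.0 = rigid (assumed scale)

def pack_order(items: list[ObjectEstimate]) -> list[ObjectEstimate]:
    """Order items so that stiff, large objects are placed first
    (bin bottom) and soft, delicate objects last (bin top)."""
    return sorted(items, key=lambda o: (-o.stiffness, -o.size_cm))

if __name__ == "__main__":
    items = [
        ObjectEstimate("bread", 20.0, 0.2),
        ObjectEstimate("canned soup", 11.0, 1.0),
        ObjectEstimate("tomato", 7.0, 0.4),
    ]
    # Rigid can forms the base; soft bread ends up on top.
    print([o.name for o in pack_order(items)])
```

In practice the paper's system fuses these sensing modalities online as items arrive on the conveyor; this sketch only captures the downstream idea that stiffness estimates can drive crush-avoiding placement decisions.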