Spot-Compose: A Framework for Open-Vocabulary Object Retrieval and Drawer Manipulation in Point Clouds

Published: 16 Apr 2024, Last Modified: 02 May 2024
Venue: MoMa WS 2024 (Oral)
License: CC BY 4.0
Keywords: mobile manipulation, robotics, grasping, 3d vision, computer vision
TL;DR: A framework for open-vocabulary object retrieval and drawer manipulation in point clouds.
Abstract: In recent years, modern deep learning techniques and large-scale datasets have led to impressive progress in 3D instance segmentation, grasp pose estimation, and robotics. These advances enable accurate detection directly in 3D scenes, object- and environment-aware grasp prediction, and stable, repeatable robotic manipulation. This work integrates these cutting-edge methods into a comprehensive framework for robotic interaction in human-centric environments. Specifically, we leverage high-resolution point clouds from a commodity scanner for open-vocabulary instance segmentation, alongside grasp pose estimation, to demonstrate dynamic picking of objects and opening of drawers. We demonstrate the performance and robustness of our framework in two sets of real-world experiments evaluating dynamic object retrieval and drawer opening, reporting success rates of 51% and 82%, respectively. To encourage further development of and experimentation with our framework, we make the code and videos available at https://spot-compose.github.io/.
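The abstract describes a pipeline of open-vocabulary instance segmentation followed by grasp selection. A minimal sketch of that flow, with entirely hypothetical names and a toy word-overlap score standing in for the paper's actual CLIP-style open-vocabulary matching and grasp estimator:

```python
# Hypothetical sketch of the retrieval pipeline; not the authors' actual API.
from dataclasses import dataclass

@dataclass
class Instance:
    label: str            # open-vocabulary label from a segmentation model
    centroid: tuple       # (x, y, z) position in the scene point cloud
    grasp_scores: list    # candidate grasp qualities from a grasp estimator

def retrieve(instances, query):
    """Pick the instance whose label best matches the text query.
    Toy matching: shared-word overlap stands in for embedding similarity."""
    def score(inst):
        return len(set(inst.label.split()) & set(query.split()))
    best = max(instances, key=score)
    return best if score(best) > 0 else None

def best_grasp(instance):
    """Select the index of the highest-scoring candidate grasp."""
    return max(range(len(instance.grasp_scores)),
               key=instance.grasp_scores.__getitem__)

scene = [
    Instance("coffee mug", (0.4, 0.1, 0.8), [0.2, 0.9, 0.5]),
    Instance("water bottle", (1.2, 0.0, 0.8), [0.7, 0.3]),
]
target = retrieve(scene, "mug")          # open-vocabulary query
grasp_idx = best_grasp(target)           # grasp to execute on the robot
```

In the real system, the segmentation labels, embeddings, and grasp candidates would come from learned models operating on the scanned point cloud; this sketch only illustrates the control flow connecting them.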
Submission Number: 4