LAN-grasp: An Effective Approach to Semantic Object Grasping Using Large Language Models

Published: 05 Apr 2024, Last Modified: 17 Apr 2024. Venue: VLMNM 2024. License: CC BY 4.0
Keywords: semantic object grasping, zero-shot object grasping, Large Language Models, Vision-Language Models
TL;DR: In this paper, we leverage the combination of a Large Language Model, a Vision-Language Model, and a traditional grasp planner to generate zero-shot grasps demonstrating a deeper semantic understanding of the objects.
Abstract: In this paper, we propose LAN-grasp, a novel approach towards more appropriate semantic grasping. We use foundation models to provide the robot with a deeper understanding of an object: the right place to grasp it and the parts to avoid. This allows our robot to grasp and utilize objects in a more meaningful and safer manner. We leverage the combination of a Large Language Model, a Vision-Language Model, and a traditional grasp planner to generate grasps demonstrating a deeper semantic understanding of the objects. We first prompt the Large Language Model to determine which object part is appropriate for grasping. Next, the Vision-Language Model identifies the corresponding part in the object image. Finally, we generate grasp proposals in the region proposed by the Vision-Language Model. Building on foundation models provides us with a zero-shot grasp method that can handle a wide range of objects without the need for further training or fine-tuning. We evaluated our method in real-world experiments on a custom object dataset. We present the results of a survey that asked participants to choose the object part appropriate for grasping. The results show that the grasps generated by our method are consistently ranked higher by the participants than those generated by a conventional grasp planner and a recent semantic grasping approach.
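The three-stage pipeline described in the abstract (LLM names the graspable part, VLM grounds it in the image, a conventional planner proposes grasps restricted to that region) can be summarized in code. Below is a minimal sketch; the function names (`query_llm_for_grasp_part`, `ground_part_in_image`, `plan_grasps_in_region`) and data types are illustrative placeholders, not the authors' API.

```python
# Hypothetical sketch of the LAN-grasp pipeline from the abstract:
# LLM -> VLM -> conventional grasp planner. All names below are
# placeholders standing in for the models used in the paper.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Grasp:
    position: Tuple[float, float, float]  # grasp center in the camera frame
    score: float                          # planner confidence for this proposal


def query_llm_for_grasp_part(object_name: str) -> str:
    """Ask a Large Language Model which part of the object is appropriate to grasp.

    For example, for "knife" the expected answer is "handle". A real
    implementation would call an LLM with a suitable prompt.
    """
    raise NotImplementedError


def ground_part_in_image(image, part_name: str) -> Tuple[int, int, int, int]:
    """Use a Vision-Language Model to locate the named part in the object image.

    Returns a bounding box (x_min, y_min, x_max, y_max) for the region
    corresponding to the part proposed by the LLM.
    """
    raise NotImplementedError


def plan_grasps_in_region(image, region: Tuple[int, int, int, int]) -> List[Grasp]:
    """Run a traditional grasp planner and keep only proposals inside the region."""
    raise NotImplementedError


def lan_grasp(image, object_name: str) -> Grasp:
    """Zero-shot semantic grasp selection: LLM names the part, VLM grounds it,
    and the grasp planner generates proposals restricted to that part."""
    part = query_llm_for_grasp_part(object_name)        # e.g. "handle"
    region = ground_part_in_image(image, part)          # bounding box of that part
    candidates = plan_grasps_in_region(image, region)   # geometric grasp proposals
    return max(candidates, key=lambda g: g.score)       # pick the best-scoring grasp
```

Because each stage is a pre-trained foundation model or an off-the-shelf planner, no component requires task-specific training or fine-tuning, which is what makes the method zero-shot.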
Supplementary Material: zip
Submission Number: 17