Keywords: 3D Visual Grounding, VLM Agent, Zero-Shot
TL;DR: We present VLM-Grounder, a novel framework using vision-language models (VLMs) for zero-shot 3D visual grounding.
Abstract: 3D visual grounding is crucial for robots, requiring integration of natural language and 3D scene understanding. Traditional methods, which depend on supervised learning with 3D point clouds, are limited by scarce datasets. Recently, zero-shot methods leveraging LLMs have been proposed to address the data issue. While effective, these methods often miss detailed scene context, limiting their ability to handle complex queries. In this work, we present VLM-Grounder, a novel framework using vision-language models (VLMs) for zero-shot 3D visual grounding based solely on 2D images. VLM-Grounder dynamically stitches image sequences, employs a grounding and feedback scheme to find the target object, and uses a multi-view ensemble projection to accurately estimate 3D bounding boxes. Experiments on the ScanRefer and Nr3D datasets show that VLM-Grounder outperforms previous zero-shot methods, achieving 51.6% Acc@0.25 on ScanRefer and 48.0% Acc on Nr3D, without relying on 3D geometry or object priors.
Supplementary Material: zip
Video: https://youtu.be/loxJb2XSrMw
Website: https://runsenxu.com/projects/VLM-Grounder
Code: https://github.com/OpenRobotLab/VLM-Grounder
Publication Agreement: pdf
Student Paper: yes
Spotlight Video: mp4
Submission Number: 694