Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs

Published: 30 Aug 2023, Last Modified: 16 Oct 2023
CoRL 2023 Poster
Keywords: Open-Vocabulary Semantics, Scene Graph, Object Grounding
Abstract: We present an Open-Vocabulary 3D Scene Graph (OVSG), a formal framework for grounding a variety of entities, such as object instances, agents, and regions, with free-form text-based queries. Unlike conventional semantic-based object localization approaches, our system facilitates context-aware entity localization, allowing for queries such as “pick up a cup on a kitchen table” or “navigate to a sofa on which someone is sitting”. In contrast to existing research on 3D scene graphs, OVSG supports free-form text input and open-vocabulary querying. Through a series of comparative experiments using the ScanNet dataset and a self-collected dataset, we demonstrate that our proposed approach significantly surpasses the performance of previous semantic-based localization techniques. Moreover, we highlight the practical application of OVSG in real-world robot navigation and manipulation experiments. The code and dataset used for evaluation will be made available upon publication.
Student First Author: yes
Supplementary Material: zip
Instructions: I have read the instructions for authors (https://corl2023.org/instructions-for-authors/)
TL;DR: Combining open-vocabulary features with a 3D scene graph to enable context-aware entity localization within a scene.
Video: https://youtu.be/2LPmhCo8Xuk?si=mjTug5zUlw97thAC
Website: https://ovsg-l.github.io/
Code: https://github.com/changhaonan/OVSG
Publication Agreement: pdf
Poster Spotlight Video: mp4
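
To make the grounding idea in the abstract concrete, below is a minimal, illustrative sketch of context-aware entity grounding over a toy scene graph. It is not the authors' implementation: the sentence-transformers encoder, the node/edge layout, and the describe/ground helper names are all assumptions made for illustration; OVSG's actual pipeline is in the paper and the released code.

# Illustrative sketch only -- not the OVSG authors' code. Assumes an
# open-vocabulary text encoder (here, a sentence-transformers model) and a
# hypothetical toy scene graph; describe() and ground() are invented helpers.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy scene graph: nodes are entities, edges carry spatial/semantic relations.
nodes = {0: "cup", 1: "kitchen table", 2: "sofa", 3: "person"}
edges = [(0, "on", 1), (3, "sitting on", 2)]

def describe(node_id: int) -> str:
    # Fold a node's relational context into a single text description.
    parts = [nodes[node_id]]
    for s, rel, t in edges:
        if node_id in (s, t):
            parts.append(f"{nodes[s]} {rel} {nodes[t]}")
    return ", ".join(parts)

def ground(query: str):
    # Rank entities by cosine similarity between the free-form query
    # and each entity's contextual description.
    descs = [describe(i) for i in nodes]
    q = model.encode([query])[0]
    d = model.encode(descs)
    sims = d @ q / (np.linalg.norm(d, axis=1) * np.linalg.norm(q))
    best = int(np.argmax(sims))
    return list(nodes)[best], descs[best], float(sims[best])

print(ground("a cup on a kitchen table"))
print(ground("a sofa on which someone is sitting"))

Folding each node's relations into its text description is the simplest way to let a contextual query such as "cup on a kitchen table" outrank a context-free "cup"; the paper's approach instead matches the query's own graph structure against the scene graph, which this similarity heuristic only approximates.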