OpenSU3D: Open World 3D Scene Understanding using Foundation Models

Published: 01 Oct 2024, Last Modified: 03 Dec 2024 · BoB Workshop 2024 · CC BY 4.0
Keywords: open world learning, 2D-to-3D knowledge transfer, open vocabulary understanding, 3D scene understanding, spatial reasoning, 3D vision and language
TL;DR: OpenSU3D builds open-set 3D scene representations by leveraging 2D foundation models, enabling open-world 3D scene understanding tasks such as open-set annotation and segmentation, open-vocabulary instance retrieval, and spatial reasoning.
Abstract: In this paper, we present a novel, scalable approach for constructing open-set, instance-level 3D scene representations, advancing open-world understanding of 3D environments. Existing methods require pre-constructed 3D scenes, face scalability issues due to per-point feature representations, and additionally struggle with contextual queries. Our method overcomes these limitations by incrementally building instance-level 3D scene representations using 2D foundation models and efficiently aggregating instance-level details such as masks, feature vectors, names, and captions. We introduce fusion schemes for feature vectors to enhance their contextual knowledge and performance on complex queries. Additionally, we explore large language models for robust automatic annotation and spatial reasoning tasks. We evaluate our proposed approach on multiple scenes from the ScanNet and Replica datasets, demonstrating zero-shot generalization capabilities and exceeding current state-of-the-art methods in open-world 3D scene understanding. Project page: https://opensu3d.github.io/
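To make the fusion idea in the abstract concrete, below is a minimal Python sketch, not the authors' implementation: per-view 2D instance features (e.g., CLIP-style embeddings) are weighted and averaged into a single instance-level vector, which then supports open-vocabulary retrieval by cosine similarity. The function names, the weighting scheme, and the use of NumPy are all illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's code): fuse per-view 2D
    # features for one 3D instance, then rank instances against a text query.
    import numpy as np

    def fuse_instance_features(view_features, view_weights=None):
        """Fuse per-view feature vectors into one instance-level embedding.

        view_features : (n_views, d) array of L2-normalized per-view features.
        view_weights  : optional (n_views,) array, e.g., mask area or detection
                        confidence per view; uniform weights if omitted.
        """
        feats = np.asarray(view_features, dtype=np.float64)
        if view_weights is None:
            weights = np.ones(len(feats))
        else:
            weights = np.asarray(view_weights, dtype=np.float64)
        weights = weights / weights.sum()
        fused = (weights[:, None] * feats).sum(axis=0)
        return fused / np.linalg.norm(fused)  # re-normalize for cosine similarity

    def retrieve(instance_embeddings, query_embedding, top_k=3):
        """Open-vocabulary retrieval: rank instances by cosine similarity to a
        text query embedding (both assumed L2-normalized)."""
        sims = instance_embeddings @ query_embedding
        return np.argsort(-sims)[:top_k]

Weighting views by mask area or detection confidence is one plausible choice for emphasizing views where an instance is clearly visible; the paper's actual fusion schemes may differ.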
Submission Number: 7