CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory

Published: 15 Nov 2022 · Last Modified: 05 May 2023 · LangRob 2022 Spotlight
Keywords: semantic neural fields, CLIP, LLMs, VLMs, semantic navigation, neural fields, NeRF
TL;DR: Teaching robots in the real world to respond to natural language queries with zero human labels, using pretrained large language models (LLMs), vision-language models (VLMs), and neural fields.
Abstract: We propose CLIP-Fields, an implicit scene model that can be trained with no direct human supervision. The model learns a mapping from spatial locations to semantic embedding vectors. This mapping can then be used for a variety of tasks, such as segmentation, instance identification, semantic search over space, and view localization. Most importantly, the mapping can be trained with supervision coming only from web-image- and web-text-trained models such as CLIP, Detic, and Sentence-BERT. Compared to baselines such as Mask-RCNN, our method achieves better few-shot instance identification and semantic segmentation on the HM3D dataset while using only a fraction of the examples. Finally, we show that with CLIP-Fields as a scene memory, robots can perform semantic navigation in real-world environments. Our code and demonstrations are available here: https://mahis.life/clip-fields/
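To make the abstract's core idea concrete, here is a minimal sketch of an implicit semantic field: a coordinate MLP with Fourier features that maps 3D points to CLIP-sized embeddings, pulled toward weak per-point label embeddings (e.g. from CLIP applied to back-projected image regions). This is an illustrative sketch, not the paper's implementation; the class and function names, the cosine loss standing in for the paper's contrastive objective, and all hyperparameters are assumptions.

```python
# Illustrative sketch only: a coordinate MLP mapping xyz -> unit-norm embeddings,
# supervised by weak label embeddings. Not the authors' actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticField(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden: int = 256, num_freqs: int = 8):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * 2 * num_freqs  # sin/cos Fourier features per axis
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def positional_encoding(self, xyz: torch.Tensor) -> torch.Tensor:
        # NeRF-style Fourier features so the MLP can represent fine spatial detail.
        freqs = 2.0 ** torch.arange(self.num_freqs, device=xyz.device)
        angles = xyz.unsqueeze(-1) * freqs          # (N, 3, num_freqs)
        feats = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return feats.flatten(start_dim=1)           # (N, 3 * 2 * num_freqs)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        emb = self.mlp(self.positional_encoding(xyz))
        return F.normalize(emb, dim=-1)             # unit norm, comparable to CLIP embeddings

def embedding_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Pull each point's predicted embedding toward its weak label embedding.
    # A cosine loss is the simplest stand-in for a contrastive objective.
    return (1.0 - F.cosine_similarity(pred, F.normalize(target, dim=-1), dim=-1)).mean()

if __name__ == "__main__":
    field = SemanticField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    xyz = torch.rand(1024, 3)         # stand-in for back-projected RGB-D points
    targets = torch.randn(1024, 512)  # stand-in for CLIP / Sentence-BERT label embeddings
    for _ in range(10):
        opt.zero_grad()
        loss = embedding_loss(field(xyz), targets)
        loss.backward()
        opt.step()
```

At query time, one would embed a natural-language query with the matching text encoder and score every point in the field by cosine similarity, which is what enables semantic search over space without any human labels.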