Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels

Published: 25 Sept 2024, Last Modified: 23 Dec 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: semantic segmentation, open-vocabulary segmentation, vision-language models, fine-tuning foundation models
TL;DR: We fine-tune CLIP for open-vocabulary semantic segmentation using only unlabeled masks generated by SAM and DINO, without any semantic labels.
Abstract: Large-scale vision-language models like CLIP have demonstrated impressive open-vocabulary capabilities for image-level tasks, excelling in recognizing what objects are present. However, they struggle with pixel-level recognition tasks like semantic segmentation, which require understanding where the objects are located. In this work, we propose a novel method, PixelCLIP, to adapt the CLIP image encoder for pixel-level understanding by guiding the model on where, which is achieved using unlabeled images and masks generated from vision foundation models such as SAM and DINO. To address the challenges of leveraging masks without semantic labels, we devise an online clustering algorithm using learnable class names to acquire general semantic concepts. PixelCLIP shows significant performance improvements over CLIP and competitive results compared to caption-supervised methods in open-vocabulary semantic segmentation.
Primary Area: Machine vision
Submission Number: 8983
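
The abstract mentions an online clustering algorithm that uses learnable class names to group mask features without semantic labels. Below is a minimal sketch of that general idea, assuming learnable prototype embeddings, cosine-similarity assignment, and a cross-entropy clustering loss; these specific names, dimensions, and the loss form are illustrative assumptions, not the authors' exact PixelCLIP formulation.

```python
# Sketch only: prototype count, feature dimension, and the loss are assumptions,
# not the actual PixelCLIP implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableClassPrototypes(nn.Module):
    """Learnable embeddings acting as 'class names' for online clustering of mask features."""

    def __init__(self, num_prototypes: int = 32, dim: int = 512):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim) * 0.02)

    def assign(self, mask_feats: torch.Tensor) -> torch.Tensor:
        # mask_feats: (M, dim) CLIP features pooled over each SAM/DINO mask.
        sims = F.normalize(mask_feats, dim=-1) @ F.normalize(self.prototypes, dim=-1).T
        return sims.argmax(dim=-1)  # hard cluster assignment per mask

    def loss(self, mask_feats: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
        # Pull each mask feature toward its currently assigned prototype.
        sims = F.normalize(mask_feats, dim=-1) @ F.normalize(self.prototypes, dim=-1).T
        targets = sims.argmax(dim=-1).detach()
        return F.cross_entropy(sims / temperature, targets)


if __name__ == "__main__":
    protos = LearnableClassPrototypes(num_prototypes=16, dim=512)
    feats = torch.randn(8, 512)        # stand-in for CLIP features pooled over 8 masks
    print(protos.assign(feats))        # cluster index per mask
    print(protos.loss(feats).item())   # scalar clustering loss to backpropagate
```

In this reading, the prototypes play the role of "learnable class names": they are optimized jointly with the fine-tuned CLIP image encoder so that masks with similar content end up assigned to the same general semantic concept, even though no ground-truth labels are available.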