Unsupervised Semantic Segmentation by Distilling Feature Correspondences

29 Sept 2021, 00:33 (edited 16 Mar 2022) · ICLR 2022 Poster
  • Keywords: Unsupervised Semantic Segmentation, Unsupervised Learning, Deep Features, Contrastive Learning, Visual Transformers, Cocostuff, Cityscapes, Semantic Segmentation
  • Abstract: Unsupervised semantic segmentation aims to discover and localize semantically meaningful categories within image corpora without any form of annotation. To solve this task, algorithms must produce features for every pixel that are both semantically meaningful and compact enough to form distinct clusters. Unlike previous works which achieve this with a single end-to-end framework, we propose to separate feature learning from cluster compactification. Empirically, we show that current unsupervised feature learning frameworks already generate dense features whose correlations are semantically consistent. This observation motivates us to design STEGO ($\textbf{S}$elf-supervised $\textbf{T}$ransformer with $\textbf{E}$nergy-based $\textbf{G}$raph $\textbf{O}$ptimization), a novel framework that distills unsupervised features into high-quality discrete semantic labels. At the core of STEGO is a novel contrastive loss function that encourages features to form compact clusters while preserving their association pattern. STEGO yields a significant improvement over the prior state of the art, on both the CocoStuff ($\textbf{+14 mIoU}$) and Cityscapes ($\textbf{+9 mIoU}$) semantic segmentation challenges.
  • One-sentence Summary: We use the correlations between self-supervised visual features to perform unsupervised semantic segmentation.
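The abstract's core idea — distilling the correlation pattern of pretrained dense features into compact segmentation features — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: it assumes cosine-similarity "correspondence" matrices for both the frozen backbone features and the learned segmentation features, and a scalar shift `b` (an assumed hyperparameter) so that weakly correlated pairs contribute a repulsive signal.

```python
import numpy as np

def cosine_corr(feats):
    """Pairwise cosine-similarity matrix over dense features.

    feats: array of shape (num_pixels, channels).
    Returns a (num_pixels, num_pixels) correspondence matrix.
    """
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

def correspondence_distill_loss(backbone_feats, seg_feats, b=0.1):
    """Hypothetical sketch of a correspondence-distillation loss.

    Pushes the segmentation-feature correlations S to align with the
    backbone-feature correlations F: entries of F above the shift b
    attract (pull S up), entries below b repel (push S down).
    """
    F = cosine_corr(backbone_feats)  # frozen "teacher" correspondences
    S = cosine_corr(seg_feats)       # learned "student" correspondences
    return -np.mean((F - b) * S)

# Toy usage: identical constant features correlate perfectly (F = S = 1),
# giving loss = -(1 - b) = -0.9 with the default shift.
feats = np.ones((4, 3))
loss = correspondence_distill_loss(feats, feats)
```

Minimizing this quantity encourages segmentation features to be highly similar wherever the backbone features already agree, which is what lets the distilled features form the compact, discrete clusters the abstract describes.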