CAPT: Contrastive Pre-Training based Semi-Supervised Open-Set Learning

MIPR 2022 (modified: 22 Dec 2022)
Abstract: Deep semi-supervised learning (SSL) has shown highly effective performance in recent years, but such methods typically operate under a closed-world assumption, where instances in the labeled and unlabeled data share the same class set. In real-world applications, however, the unlabeled data encountered during model deployment may contain samples from novel classes (the open-set scenario); for example, new types of scene images may occur in a self-driving system. In this paper, we advocate CAPT, a two-stage semi-supervised learning framework for handling this realistic scenario based on a self-supervised pre-training step. The key idea is to introduce the embedding from pre-training into the SSL open-set classifier, so that the model can recognize the seen classes and cluster the instances from novel categories simultaneously. Our framework first pre-trains a semantically meaningful representation of all samples from the labeled and unlabeled data. CAPT then uses the learned embedding as initialization to build a semi-supervised classifier that clusters the novel classes. We thoroughly evaluate our framework on the large-scale image benchmarks CIFAR-10 and CIFAR-100, obtaining state-of-the-art results.
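The abstract gives no implementation details, so the following is only a minimal sketch of the two-stage idea it describes: a self-supervised contrastive pre-training pass over all images, followed by a classifier initialized from the pre-trained encoder whose head covers both seen classes and slots for novel categories. The SimCLR-style NT-Xent loss, the toy `Encoder`, and all sizes and names here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy convolutional encoder standing in for the paper's backbone (assumed)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.proj(self.conv(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                       # (2N, 2N) similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    n = z1.size(0)
    # The positive for view i is the other view of the same image (i +/- n).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Stage 1: contrastive pre-training on all labeled + unlabeled images.
encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(16, 3, 32, 32)  # stand-in for a batch of CIFAR-sized images
view1 = x + 0.1 * torch.randn_like(x)  # crude stand-in for data augmentation
view2 = x + 0.1 * torch.randn_like(x)
loss = nt_xent_loss(encoder(view1), encoder(view2))
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: reuse the pre-trained encoder as initialization for a
# semi-supervised classifier whose head has logits for the seen classes
# plus extra slots reserved for clustering novel categories.
num_seen, num_novel_slots = 10, 5   # hypothetical sizes
classifier = nn.Sequential(encoder, nn.Linear(128, num_seen + num_novel_slots))
logits = classifier(x)              # shape (16, 15)
```

In this reading, stage 2 would be trained with a standard SSL objective on the seen-class logits while the extra head slots absorb unlabeled samples from novel classes; how CAPT actually assigns and trains those slots is specified in the paper, not here.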