Keywords: Novel class discovery, General category discovery, Self-supervised learning, Label propagation
TL;DR: Our approach discovers both known and unknown classes in unlabeled datasets by exploiting affinity relationships between samples via auxiliary prompts.
Abstract: Recent advances in semi-supervised learning (SSL) have achieved remarkable success in learning with partially labeled in-distribution data. However, many existing SSL models rely on the closed-set assumption and thus fail to learn from unlabeled data sampled from novel semantic classes. In this work, we adopt the open-set SSL setting and target the pragmatic but under-explored generalized category discovery (GCD) setting, which aims to categorize unlabeled training data drawn from both known and unknown novel classes by leveraging the information in the labeled data. To address this challenging problem, we propose a two-stage contrastive affinity learning method with auxiliary visual prompts, dubbed PromptCAL, which discovers reliable affinities between labeled and unlabeled samples to learn better clusters for both known and novel classes. Specifically, we first embed learnable visual prompts into a pre-trained vision transformer (ViT) backbone and supervise these prompts with an auxiliary loss to reinforce semantic discriminativeness and learn generalizable affinity relationships. Second, we propose an affinity-based contrastive loss built on an iterative semi-supervised affinity propagation process, which further enhances the benefits of prompt supervision. Extensive evaluation on six benchmark datasets demonstrates that our method discovers novel classes effectively even with limited annotations and surpasses the current state-of-the-art (by more than 10% on CUB and StanfordCars, and by a significant margin on ImageNet-100). Our code and models will be publicly released.
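The abstract's second stage hinges on iteratively propagating affinities from labeled anchors to unlabeled samples over a neighborhood graph. The sketch below illustrates one common instantiation of such semi-supervised propagation (diffusion of one-hot seeds over a symmetrically normalized kNN affinity graph); function and parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def propagate_affinities(features, labels, num_classes, k=5, alpha=0.9, iters=20):
    """Illustrative semi-supervised label/affinity propagation on a kNN graph.

    features: (n, d) L2-normalized embeddings.
    labels:   (n,) ints; -1 marks unlabeled samples.
    Returns soft pseudo-labels of shape (n, num_classes).
    """
    n = features.shape[0]
    sim = features @ features.T              # cosine similarities
    np.fill_diagonal(sim, -np.inf)           # exclude self-edges

    # sparse symmetric kNN affinity matrix with non-negative weights
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argpartition(-sim[i], k)[:k]
        A[i, nbrs] = np.clip(sim[i, nbrs], 0, None)
    A = np.maximum(A, A.T)

    # symmetric normalization S = D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    d[d == 0] = 1.0
    dinv = 1.0 / np.sqrt(d)
    S = A * dinv[:, None] * dinv[None, :]

    # one-hot seeds on labeled samples only
    Y = np.zeros((n, num_classes))
    lab = labels >= 0
    Y[lab, labels[lab]] = 1.0

    # iterative diffusion: F <- alpha * S F + (1 - alpha) * Y
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F
```

The resulting soft assignments can then supply positive pairs for an affinity-based contrastive loss, in the spirit of the abstract's description.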
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/promptcal-contrastive-affinity-learning-via/code)