CONTROL: A Contrastive Learning Framework for Open World Semi-Supervised Learning

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Contrastive Learning; Semi-Supervised Learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We design a unified contrastive learning framework that further improves the performance of existing OWSSL algorithms.
Abstract: In recent years, open-world semi-supervised learning (OWSSL) has received tremendous attention, largely because unlabeled real-world data often contains unseen classes -- classes that are not represented in the labeled dataset -- which can severely degrade the performance of traditional semi-supervised learning methods. Open-world semi-supervised learning algorithms are designed to enable models to distinguish both seen and unseen classes. However, existing algorithms still classify unseen classes poorly and may face the risk of representation collapse. To better address these issues, we propose a contrastive learning framework called CONTROL that integrates three optimization objectives: nearest-neighbor contrastive learning, supervised contrastive learning, and unsupervised contrastive learning. We justify the framework theoretically by proving that optimizing contrastive learning at the feature level benefits unseen-class classification, and that the uniformity mechanism in contrastive learning further helps to prevent representation collapse. Serving as a unified and efficient framework, CONTROL is compatible with a broad range of existing open-world semi-supervised learning algorithms. Through empirical studies, we demonstrate the superiority of CONTROL over prevailing state-of-the-art open-world semi-supervised learning algorithms. Notably, our method achieves significant improvements in both unseen-class classification and all-class classification over previous methods on the CIFAR and ImageNet datasets.
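The abstract describes CONTROL as a weighted combination of three contrastive objectives. Below is a minimal PyTorch sketch of how such a combination could look; it is not the authors' released code. The loss weights (`lambdas`), the memory-bank-based nearest-neighbor lookup, and all function names are illustrative assumptions, and the paper's exact formulation may differ.

```python
# Minimal sketch (assumed, not the authors' implementation) of combining
# nearest-neighbor, supervised, and unsupervised contrastive objectives.
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, candidates, temperature=0.1):
    """Generic InfoNCE: pull anchor toward its positive, push away from candidates."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    candidates = F.normalize(candidates, dim=-1)
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / temperature  # (B, 1)
    neg_logits = anchor @ candidates.T / temperature                         # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    targets = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, targets)


def supervised_contrastive(z, labels, temperature=0.1):
    """SupCon-style loss over labeled (seen-class) embeddings."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.T / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1).div(pos_count).mean()


def control_loss(z_labeled, labels, z_view1, z_view2, memory_bank,
                 lambdas=(1.0, 1.0, 1.0)):
    """Combine the three objectives; the relative weights are hypothetical."""
    # (1) Supervised contrastive learning on labeled samples.
    l_sup = supervised_contrastive(z_labeled, labels)
    # (2) Unsupervised contrastive learning between two augmented views.
    l_unsup = info_nce(z_view1, z_view2, z_view2)
    # (3) Nearest-neighbor contrastive learning: the positive is the closest
    #     embedding in a memory bank of past unlabeled features (assumed design).
    with torch.no_grad():
        sims = F.normalize(z_view1, dim=-1) @ F.normalize(memory_bank, dim=-1).T
        nn_idx = sims.argmax(dim=1)
    l_nn = info_nce(z_view1, memory_bank[nn_idx], memory_bank)
    return lambdas[0] * l_nn + lambdas[1] * l_sup + lambdas[2] * l_unsup
```

In this sketch the combined loss would be added on top of whatever classification objective the underlying OWSSL algorithm already uses, which is one way the framework could remain compatible with existing methods as the abstract claims.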
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3214