EXCOST: Semi-Supervised Classification with Exemplar-Contrastive Self-Training

22 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Semi-supervised Learning, Image Classification, Cognitive Psychology
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Inspired by cognitive psychology, we propose a self-training algorithm named EXCOST and a regularization term named CIL.
Abstract: Similar to human learning, semi-supervised learning (SSL) methods aim to harness vast amounts of unlabeled data alongside a limited set of labeled samples. Inspired by theories of category representation in cognitive psychology, this paper proposes a novel SSL algorithm named Exemplar-Contrastive Self-Training (EXCOST). The algorithm assigns pseudo-labels to unlabeled samples that exhibit both high confidence and high exemplar similarity, and then leverages these pseudo-labels for self-training. Furthermore, a novel regularization term named Category-Invariant Loss (CIL) is introduced for SSL. CIL promotes consistent class probabilities across different representations of the same sample under various perturbations, such as rotation or translation. Notably, the proposed approach depends on neither the prevalent weak-and-strong data augmentation strategy nor the use of an exponential moving average (EMA). The efficacy of EXCOST is demonstrated through comprehensive evaluations on semi-supervised image classification tasks, where it attains state-of-the-art performance on benchmark datasets, including MNIST with 2, 5, and 10 labels per class, SVHN with 25 labels per class, and CIFAR-10 with 25 labels per class.
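Illustrative sketch: the abstract names two mechanisms, pseudo-label selection by confidence plus exemplar similarity, and the CIL consistency term. The following minimal PyTorch sketch shows one way these could look. The function names, the choice of cosine similarity and mean-squared error, and the thresholds tau_conf and tau_sim are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two ideas named in the abstract.
# All names and thresholds here are illustrative assumptions.
import torch
import torch.nn.functional as F

def category_invariant_loss(logits_a, logits_b):
    """CIL-style consistency term: encourage identical class probabilities
    for two perturbed views (e.g., rotated/translated) of the same sample."""
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    return F.mse_loss(p_a, p_b)

def select_pseudo_labels(probs, feats, exemplars, tau_conf=0.95, tau_sim=0.8):
    """Keep an unlabeled sample only if its predicted class has high
    confidence AND its feature lies close to that class's exemplar.

    probs:     (N, C) softmax outputs for unlabeled samples
    feats:     (N, D) feature embeddings of the same samples
    exemplars: (C, D) one exemplar embedding per class
    Returns the retained pseudo-labels and the boolean selection mask.
    """
    conf, pred = probs.max(dim=1)                 # confidence and predicted class
    feats_n = F.normalize(feats, dim=1)
    ex_n = F.normalize(exemplars, dim=1)
    # cosine similarity between each sample and its predicted-class exemplar
    sim = (feats_n * ex_n[pred]).sum(dim=1)
    mask = (conf >= tau_conf) & (sim >= tau_sim)
    return pred[mask], mask
```

The selected pseudo-labels would then feed a standard self-training step (e.g., cross-entropy on the masked samples) alongside the CIL term; how the exemplars are chosen or updated is not specified in the abstract.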
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: zip
Submission Number: 5837