Training strategies with unlabeled and few labeled examples under 1-pixel attack by combining supervised and self-supervised learning

26 May 2022 (modified: 05 May 2023) · ICML 2022 Pre-training Workshop
Keywords: feature learning, self-supervised learning, semi-supervised learning
TL;DR: Motivated by scenarios where labeled data is scarce relative to unlabeled data, we show how to combine supervised and self-supervised tasks to improve the learned representations as well as robustness to 1-pixel attacks.
Abstract: Self-supervised pre-training has shown excellent feature-learning performance using only unlabeled examples. Still, it is unclear how different self-supervised tasks perform across distinct image domains, and training issues remain in scenarios with limited labeled data. We investigate two self-supervised tasks, rotation prediction and Barlow Twins, on three distinct image domains, exploring a combination of supervised and self-supervised learning. Our motivation is scenarios where the proportion of labeled data with respect to unlabeled data is small, and we also investigate the models' robustness to 1-pixel attacks. Models that combine supervised and self-supervised tasks can exploit the unlabeled data to improve the learned representation in terms of linear discrimination, as well as to keep learning even under attack.
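The abstract does not specify the exact architecture or loss weighting, so the following is only a minimal sketch of the general idea it describes: a shared encoder trained with a supervised cross-entropy loss on the few labeled images plus a self-supervised rotation-prediction loss on unlabeled images. All class names, layer sizes, and the loss weight below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions: backbone, heads, and loss weight are hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoderModel(nn.Module):
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        # Small illustrative encoder; the paper may use a different backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.cls_head = nn.Linear(feat_dim, num_classes)  # supervised head
        self.rot_head = nn.Linear(feat_dim, 4)            # 0/90/180/270 degrees

    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z), self.rot_head(z)

def rotate_batch(x):
    """Create rotated copies of x and the corresponding rotation labels."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(x, k, dims=(2, 3)))
        labels.append(torch.full((x.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def combined_loss(model, labeled_x, labeled_y, unlabeled_x, weight=1.0):
    # Supervised term on the (few) labeled examples.
    logits, _ = model(labeled_x)
    sup = F.cross_entropy(logits, labeled_y)
    # Self-supervised rotation-prediction term on unlabeled examples.
    rot_x, rot_y = rotate_batch(unlabeled_x)
    _, rot_logits = model(rot_x)
    ssl = F.cross_entropy(rot_logits, rot_y)
    return sup + weight * ssl
```

The same pattern would apply with Barlow Twins in place of rotation prediction, swapping the rotation head and loss for a projector and the redundancy-reduction objective on two augmented views.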