Self-supervised Learning with Temporary Exact Solutions: Linear Projection

Published: 01 Jan 2023, Last Modified: 06 Nov 2023 · INDIN 2023
Abstract: Self-supervised learning has emerged as a promising method for training neural networks without the need for annotated data. In this paper, we present a self-supervised learning method for training neural networks, especially (but not limited to) vision transformers, that learn meaningful representations of images and videos without requiring large amounts of labeled data. Our method is based on exact solutions computed from the representations that the model generates. We show that the model learns useful features that can later be fine-tuned on industrial downstream tasks. We demonstrate the effectiveness of our method on a subset of the Universal Image Embeddings 130k dataset [1], a private industrial Pill Identification dataset, and the standard CIFAR-10 dataset [20]. Our method outperforms the strong baselines BYOL [2] and Barlow Twins [3] while using fewer parameters and resources. We show the capability of the trained model on a Deep Metric Learning task by comparing Swin Transformer [4] backbones trained with our method, BYOL [2], and Barlow Twins [3]. The results also show that the proposed method achieves higher accuracy than the others in both pre-training and fine-tuning while using fewer parameters. GitHub: https://github.com/rootvisionai/solo-learn.
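The abstract describes a self-supervised objective built around exact (closed-form) solutions of a linear projection over the model's own representations. The sketch below is a hypothetical illustration of that general idea, not the paper's actual algorithm: it assumes a ridge least-squares formulation in which a closed-form linear map between the embeddings of two augmented views is computed, detached, and used as a temporary regression target. The function names (`solve_linear_projection`, `exact_solution_loss`) and the ridge term are illustrative assumptions.

```python
# Hypothetical sketch of an "exact solution" linear-projection target for
# self-supervised learning. This is an assumption-based illustration, not the
# implementation published in the linked repository.
import torch
import torch.nn.functional as F


def solve_linear_projection(z1: torch.Tensor, z2: torch.Tensor, ridge: float = 1e-4) -> torch.Tensor:
    """Closed-form (ridge) least-squares solution W of z1 @ W ~= z2.

    z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    The solution is detached so it acts as a temporary, fixed target.
    """
    d = z1.shape[1]
    gram = z1.T @ z1 + ridge * torch.eye(d, device=z1.device)  # (dim, dim), regularized
    w = torch.linalg.solve(gram, z1.T @ z2)                    # exact closed-form solution
    return w.detach()


def exact_solution_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Regress view-1 embeddings, mapped through the exact projection, onto view 2."""
    w = solve_linear_projection(z1, z2)
    pred = z1 @ w
    return F.mse_loss(F.normalize(pred, dim=-1), F.normalize(z2, dim=-1))
```

In such a setup, the loss would be computed per batch on the backbone's (e.g., a Swin Transformer's) pooled embeddings of two augmentations, with gradients flowing only through the embeddings and not through the detached projection target.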