Towards Good Practices in Self-Supervised Representation Learning

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: self-supervised learning, unsupervised learning, deep learning, neural networks, good practices
Abstract: Self-supervised representation learning has seen remarkable progress in the last few years. More recently, contrastive instance learning has shown impressive results compared to its supervised counterparts, in particular on downstream tasks such as image classification and object detection. However, even with the ever-increasing interest in contrastive instance learning, it is still largely unclear why these methods work so well. In this paper, we aim to unravel some of the mysteries behind their success, namely the good practices that underpin it. In particular, we investigate why the nonlinear projection head is essential, why instance discrimination does not suffer from strong data augmentation, and whether large numbers of negative samples are required during contrastive loss computation. Through an extensive empirical analysis, we hope not only to provide insights but also to lay out a set of best practices that led to the success of recent work in self-supervised representation learning.
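As context for the three questions the abstract raises, here is a minimal, illustrative PyTorch sketch of contrastive instance learning with a nonlinear projection head, in the spirit of SimCLR's NT-Xent loss. The dimensions, temperature, and the simplification to cross-view-only negatives are assumptions for illustration, not details taken from this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Nonlinear MLP projection head applied on top of encoder features
    (illustrative sizes; SimCLR-style two-layer MLP)."""
    def __init__(self, in_dim=2048, hidden_dim=2048, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss over two augmented views of N images.

    z1, z2: (N, D) projected embeddings. The matching row in the other
    view is the positive; every other row in the batch is a negative.
    This uses only cross-view negatives, a simplification of the full
    2N-sample formulation.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```

In this setup, the paper's questions map directly onto the moving parts: the ProjectionHead between encoder and loss, the augmentations that produce the two views, and the batch size N, which determines the number of negatives.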
One-sentence Summary: Highlight and justify the effectiveness of good practices in recent self-supervised learning methods.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=rIv-HWx8k