SimMER: Simple Maximization of Entropy and Rank for Self-supervised Representation Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: self-supervised learning, representation learning, computer vision, image classification
Abstract: Consistency regularization, which enforces consistency across a model's responses to different views of the same input, is widely used for self-supervised image representation learning. However, consistency regularization can be trivially satisfied by collapsing the model into a constant mapping. To prevent this, existing methods often rely on negative pairs (contrastive learning) or ad hoc architectural constructs. Inspired by SimSiam's alternating optimization hypothesis, we propose SimMER, a novel optimization target for self-supervised learning that explicitly avoids model collapse by balancing consistency (total variance minimization) against the entropy of the inputs' representations (entropy maximization). With consistency regularization and entropy maximization alone, the method achieves performance on par with the state of the art. Furthermore, we introduce a linear independence loss that further increases performance by removing linear dependencies along the feature dimension of the batch representation matrix (rank maximization), which has both anti-collapsing and redundancy-removal effects. With both entropy and rank maximization, our method surpasses the state of the art on CIFAR-10 and Mini-ImageNet under the standard linear evaluation protocol.
One-sentence Summary: We propose a novel self-supervised learning algorithm that avoids model collapse through an explicit trade-off among consistency regularization, entropy maximization, and linear independence, without negative pairs or ad hoc architectural constructs.
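The abstract does not give the exact loss formulation, so the following is a minimal PyTorch sketch of the kind of objective it describes: a consistency term (total variance minimization between views), an entropy surrogate over the batch representations, and a linear-independence penalty along the feature dimension of the batch representation matrix. The function name simmer_style_loss, the weights w_entropy and w_rank, the pairwise-log-distance entropy estimator, and the off-diagonal covariance penalty are illustrative assumptions, not the paper's actual definitions.

```python
import torch
import torch.nn.functional as F


def simmer_style_loss(z1, z2, w_entropy=1.0, w_rank=1.0, eps=1e-6):
    """Illustrative composite objective: consistency + entropy + linear independence.

    z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    All term definitions below are assumptions for illustration, not the
    paper's exact formulation.
    """
    n, d = z1.shape

    # Consistency (total variance minimization): pull paired views together.
    consistency = F.mse_loss(z1, z2)

    # Entropy maximization (assumed surrogate): encourage embeddings in the
    # batch to spread out, via mean log pairwise distance (diagonal excluded).
    z = torch.cat([z1, z2], dim=0)
    pdist = torch.cdist(z, z)
    mask = ~torch.eye(2 * n, dtype=torch.bool, device=z.device)
    entropy_surrogate = torch.log(pdist[mask] + eps).mean()

    # Rank maximization / linear independence (assumed): penalize off-diagonal
    # entries of the feature covariance of the batch representation matrix.
    zc = z - z.mean(dim=0, keepdim=True)
    cov = (zc.T @ zc) / (2 * n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    rank_penalty = (off_diag ** 2).sum() / d

    # Minimize inconsistency and redundancy, maximize entropy.
    return consistency - w_entropy * entropy_surrogate + w_rank * rank_penalty


# Hypothetical usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
    print(simmer_style_loss(z1, z2))
```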