Investigating Why Contrastive Learning Benefits Robustness against Label Noise

26 May 2022, 20:09 (modified: 23 Jul 2022, 02:24) · ICML 2022 Pre-training Workshop
Keywords: robustness, contrastive learning, pretraining
Abstract: Self-supervised contrastive learning has recently been shown to be very effective in preventing deep networks from overfitting noisy labels. Despite its empirical success, the theoretical understanding of how contrastive learning boosts robustness is very limited. In this work, we rigorously prove that the learned representation matrix has certain desirable properties, in terms of its SVD, that benefit robustness against label noise. We further show that the low-rank structure of the Jacobian of deep networks pre-trained with contrastive learning allows them to achieve superior initial performance when fine-tuned on noisy labels. Finally, we demonstrate that the initial robustness provided by contrastive learning enables robust training methods to achieve state-of-the-art performance under extreme noise levels.
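The abstract's central object is the singular value spectrum of a learned representation matrix. As a minimal illustration (not the paper's code), the sketch below uses a synthetic matrix whose signal is concentrated in a few directions, and measures how much spectral energy the top singular values capture; the dimensions and noise scale are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical illustration (not the paper's code): inspect the effective
# rank of a representation matrix via its singular value spectrum.
rng = np.random.default_rng(0)

# Simulate representations with approximately low-rank structure:
# n samples embedded in d dimensions, signal concentrated in k directions.
n, d, k = 500, 128, 10
signal = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))
noise = 0.01 * rng.normal(size=(n, d))
R = signal + noise  # representation matrix

# Singular values reveal how much energy lies in the top-k subspace.
s = np.linalg.svd(R, compute_uv=False)
energy_top_k = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"fraction of spectral energy in top {k} directions: {energy_top_k:.4f}")
```

A spectrum dominated by its leading singular values, as in this toy example, is the kind of low-rank structure the abstract associates with robustness to label noise.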