Contrastive Quant: Quantization Makes Stronger Contrastive Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Contrastive learning, quantization
Abstract: Contrastive learning, which learns visual representations by enforcing feature consistency under different augmented views, has emerged as one of the most effective unsupervised learning methods. In this work, we explore contrastive learning from a new perspective, inspired by recent works showing that properly designed weight perturbations or quantization help models learn a smoother loss landscape. Interestingly, we find that quantization, when properly engineered, can enhance the effectiveness of contrastive learning. To this end, we propose a novel contrastive learning framework, dubbed Contrastive Quant, to encourage feature consistency under both (1) differently augmented inputs via various data transformations and (2) differently augmented weights/activations via various quantization levels, where the noise injected via quantization can be viewed as an augmentation of both model weights and intermediate activations that complements the input augmentations. Extensive experiments, built on top of two state-of-the-art contrastive learning methods, SimCLR and BYOL, show that Contrastive Quant consistently improves the learned visual representation, especially with limited labeled data under semi-supervised scenarios. For example, our Contrastive Quant achieves an 8.69% and a 10.27% higher accuracy on ResNet-18 and ResNet-34, respectively, on ImageNet when fine-tuning with 10% labeled data. We believe this work opens up a new perspective for future contrastive learning innovations. All code will be released upon acceptance.
One-sentence Summary: We propose the Contrastive Quant framework to further encourage feature consistency under differently augmented weights/activations via various quantization levels, on top of existing contrastive learning methods.
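
The abstract describes enforcing feature consistency between full-precision features and features computed under quantization-perturbed activations, added on top of a standard contrastive objective. Below is a minimal PyTorch sketch of that idea under our own assumptions; the toy encoder, the uniform fake-quantization routine, the bit-width range, and the cosine consistency loss are illustrative placeholders, not the authors' released implementation.

```python
# Hedged sketch: quantization as an augmentation on activations for contrastive learning.
# All module names, bit-width choices, and the loss form here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fake_quantize(x, num_bits):
    """Uniformly quantize a tensor to `num_bits` levels and dequantize,
    with a straight-through estimator so gradients pass through."""
    qmax = 2 ** num_bits - 1
    zero = x.min()
    scale = (x.max() - zero).clamp(min=1e-8) / qmax
    q = torch.round((x - zero) / scale).clamp(0, qmax)
    x_q = q * scale + zero
    return x + (x_q - x).detach()  # forward: quantized values; backward: identity gradient


class QuantNoisyEncoder(nn.Module):
    """Toy encoder whose intermediate activations can be perturbed by fake quantization."""

    def __init__(self, dim_in=128, dim_out=64):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, 256)
        self.fc2 = nn.Linear(256, dim_out)

    def forward(self, x, num_bits=None):
        h = F.relu(self.fc1(x))
        if num_bits is not None:           # quantization acts as a weight/activation augmentation
            h = fake_quantize(h, num_bits)
        return F.normalize(self.fc2(h), dim=-1)


def consistency_loss(z_fp, z_q):
    """Encourage agreement between full-precision and quantization-augmented features."""
    return (1.0 - F.cosine_similarity(z_fp, z_q, dim=-1)).mean()


# Usage sketch: two augmented views plus one randomly sampled quantization level per step.
encoder = QuantNoisyEncoder()
view1 = torch.randn(32, 128)               # stand-in for one augmented image batch
bits = int(torch.randint(4, 9, (1,)))      # hypothetical random bit-width in [4, 8]
z1 = encoder(view1)                        # full-precision features
z1_q = encoder(view1, num_bits=bits)       # quantization-augmented features
extra_loss = consistency_loss(z1, z1_q)    # added to the usual contrastive loss between views
```

In this sketch the consistency term would be summed with the base SimCLR or BYOL objective; the exact weighting and where quantization is applied (weights, activations, or both) are left as assumptions.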