[Re] Intriguing Properties of Contrastive Losses

Published: 02 Aug 2023, Last Modified: 02 Aug 2023, MLRC 2022
Keywords: self-supervised learning, contrastive learning, contrastive loss, SimCLR, feature learning, semantic learning
Abstract: Reproducibility Summary

Scope of Reproducibility — In 2021, Chen et al. [1] studied three properties of contrastive learning. One of their results shows that the instance-based objective widely used in existing contrastive learning methods can learn meaningful local features (e.g. dogs' facial components, as shown in Figure 9) despite operating on global image representations. In this paper, we validate this property, perform experiments beyond the findings of Chen et al. [1], and evaluate the effect of a deep projection head on accuracy when using different batch sizes for the linear evaluation of SimCLR.

Methodology — We implemented the project in Python using PyTorch as the deep learning library, whereas the original paper's repository provides three Jupyter notebooks using TensorFlow. In particular, the original repository does not provide any code for the experiments we reproduced, so we fully re-implemented the proposed methods by following the description in the original paper. We used the pre-trained SimCLR models provided in the authors' repository.

Results — Our linear evaluation accuracies differ across batch sizes by between 0.19% and 2.05%, while those in the original paper differ by 0.20% to 0.80%. Nonetheless, we believe our results support the claim that the differences in top-1 accuracy among batch sizes are minimal: the larger range is explained by our different choices of dataset, base encoder, and batch sizes, and by the fact that the range increases substantially when the projection head is not deep. All other experiments support both the original and the newly tested claims.

What was easy — The paper of Chen et al. [1] is well written, which made it easy to comprehend. In addition, checkpoints of the models are provided, so it was relatively easy to reproduce the considered experiments.

What was difficult — We had issues reproducing the linear evaluation results of SimCLR due to our limited computational resources, so we trained a smaller base encoder for fewer epochs than the original paper. We also had some doubts about which version of SimCLR was used and about other implementation details, because the original repository provides checkpoints for both versions and does not include the code for the experiments we reproduced.

Communication with original authors — We contacted the first author of the original paper (Ting Chen) twice by email with our questions and promptly received replies.
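To make the reproduced setup concrete, the following is a minimal PyTorch sketch (not the authors' code; all names, dimensions, and hyperparameters are illustrative assumptions) of the instance-based contrastive objective (NT-Xent) used by SimCLR and of a projection head whose depth can be varied, the component whose effect on linear evaluation accuracy is discussed above.

# Minimal sketch, assuming a standard SimCLR-style setup in PyTorch.
# Names, dimensions, and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent loss over two batches of projections of shape (N, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2N, D), unit norm
    sim = z @ z.t() / temperature                                # cosine similarities as logits
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                        # exclude self-similarity
    # The positive for view i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def projection_head(in_dim=2048, hidden_dim=2048, out_dim=128, depth=2):
    """Non-linear projection head; larger depth gives the 'deeper' variants."""
    layers, dim = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(inplace=True)]
        dim = hidden_dim
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

if __name__ == "__main__":
    head = projection_head(depth=3)
    h1, h2 = torch.randn(8, 2048), torch.randn(8, 2048)          # encoder outputs for two views
    print(nt_xent_loss(head(h1), head(h2)).item())

For linear evaluation, the projection head is discarded and a single linear classifier is trained on the frozen encoder outputs; the sketch only illustrates the pre-training objective.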
Paper Url: https://proceedings.neurips.cc/paper/2021/hash/628f16b29939d1b060af49f66ae0f7f8-Abstract.html
Paper Venue: Other venue (not in list)
Venue Name: NeurIPS 2021
Confirmation: The report pdf is generated from the provided camera ready Google Colab script, The report metadata is verified from the camera ready Google Colab script, The report contains correct author information, The report contains link to code and SWH metadata, The report follows the ReScience latex style guides as in the Reproducibility Report Template (https://paperswithcode.com/rc2022/registration), The report contains the Reproducibility Summary in the first page, The latex .zip file is verified from the camera ready Google Colab script
Latex: zip
Journal: ReScience Volume 9 Issue 2 Article 6
Doi: https://www.doi.org/10.5281/zenodo.8173662
Code: https://archive.softwareheritage.org/swh:1:dir:35a398e4df2fda2f4241886f39c193b5a53a3e4c