A Generative Augmentation Framework for Contrastive Learning

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Generative Modeling, Contrastive Learning, Self-Supervised Learning, Computer Vision, Generative Adversarial Networks, Transformers, Image Augmentation
TL;DR: Using generative models to augment images in contrastive learning increases downstream accuracy.
Abstract: Contrastive learning has achieved unprecedented levels of accuracy on computer vision applications in recent years. However, over this period, the image augmentations used in these frameworks have remained largely unchanged. We propose a new augmentation strategy, GenCL, a Generative Augmentation Framework for Contrastive Learning, which utilizes generative modeling to augment images for forming positive pairs. Unlike geometric and color augmentations, GenCL is able to change high-level visual features in images, such as the background, positioning, and color schemes of objects. Our results show that adding these generative augmentations to the suite of augmentations typically used in contrastive learning significantly increases downstream accuracy. In our work, we (1) outline the neural network architecture used in GenCL, (2) use ablation studies to optimize the hyperparameters of our generative augmentations, and (3) provide a cost-benefit analysis of our implementation in a contrastive learning setting. With these findings, we show that leveraging generative models can significantly increase the performance of contrastive learning on self-supervised learning benchmarks, providing a new avenue for future contrastive learning research.
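As a rough illustration of the idea described in the abstract, the sketch below shows how a generative augmentation could be mixed into a standard augmentation suite when forming the two views of a positive pair. All function names (`standard_augment`, `generative_augment`, `make_positive_pair`) and the probability parameter are hypothetical; the generative step is simulated with structured noise in place of a real GAN or transformer, since the paper's actual architecture is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def standard_augment(img):
    """Typical geometric/color augmentations: random flip plus brightness jitter."""
    if rng.random() < 0.5:
        img = img[:, ::-1]  # horizontal flip (HWC layout)
    # brightness jitter, clipped back into [0, 1]
    return np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)

def generative_augment(img):
    """Stand-in for a generative model that alters high-level features
    (background, object positioning, color scheme). Simulated here by
    blending the image with random structured content."""
    synthetic = rng.uniform(0.0, 1.0, size=img.shape)
    return 0.7 * img + 0.3 * synthetic

def make_positive_pair(img, p_generative=0.5):
    """Form a positive pair: each view gets the standard suite, and the
    generative augmentation is applied with probability p_generative."""
    views = []
    for _ in range(2):
        view = standard_augment(img.copy())
        if rng.random() < p_generative:
            view = generative_augment(view)
        views.append(view)
    return views

# Example: build one positive pair from a random 32x32 RGB image.
image = rng.uniform(0.0, 1.0, size=(32, 32, 3))
view1, view2 = make_positive_pair(image)
```

In an actual contrastive pipeline, `view1` and `view2` would be encoded and pulled together in embedding space while being pushed apart from views of other images.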
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2021