i-Mix: A Strategy for Regularizing Contrastive Representation Learning

28 Sep 2020 (modified: 14 Jan 2021), ICLR 2021 Poster
  • Keywords: self-supervised learning, unsupervised representation learning, contrastive representation learning, data augmentation, MixUp
  • Abstract: Contrastive representation learning has been shown to be an effective way of learning representations from unlabeled data. However, much of this progress has been made in the vision domain, relying on data augmentations carefully designed using domain knowledge. In this work, we propose i-Mix, a simple yet effective regularization strategy for improving contrastive representation learning in both vision and non-vision domains. We cast contrastive learning as training a non-parametric classifier by assigning a unique virtual class to each data instance in a batch. Then, data instances are mixed in both the input and virtual label spaces, providing more augmented data during training (an illustrative sketch follows this list). In experiments, we demonstrate that i-Mix consistently improves the quality of self-supervised representations across domains, resulting in significant performance gains on downstream tasks. Furthermore, we confirm its regularization effect via extensive ablation studies across model and dataset sizes. The code will be released.
  • One-sentence Summary: We propose i-Mix, a simple yet effective strategy for regularizing contrastive representation learning in both vision and non-vision domains.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
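As a rough illustration of the idea described in the abstract, the following is a minimal, hypothetical PyTorch sketch of i-Mix applied to an N-pair-style contrastive loss. It is not the authors' released code; the `encoder`, the two augmented views `x` and `x_aug`, and the `temperature` and `alpha` hyperparameters are names assumed here for illustration. Each of the N instances in a batch defines its own virtual class, and inputs and virtual labels are mixed with the same MixUp coefficient.

```python
import torch
import torch.nn.functional as F

def i_mix_npair_loss(encoder, x, x_aug, temperature=0.2, alpha=1.0):
    """Illustrative sketch (not the official implementation) of i-Mix
    on an N-pair-style contrastive loss.

    x, x_aug : two augmented views of the same batch, shape (N, ...).
    Each instance i is assigned the virtual one-hot label e_i over the batch;
    inputs and virtual labels are mixed with the same MixUp coefficient.
    """
    n = x.size(0)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(n, device=x.device)

    # MixUp in the input space
    x_mix = lam * x + (1.0 - lam) * x[perm]

    # Embed both views and L2-normalize the embeddings
    z_mix = F.normalize(encoder(x_mix), dim=1)
    z_aug = F.normalize(encoder(x_aug), dim=1)

    # Similarity logits: each of the N instances acts as one virtual class
    logits = z_mix @ z_aug.t() / temperature

    # Cross-entropy against the mixed virtual labels
    targets = torch.arange(n, device=x.device)
    return lam * F.cross_entropy(logits, targets) + \
           (1.0 - lam) * F.cross_entropy(logits, perm)
```

Because the virtual labels are one-hot, cross-entropy against the mixed label lam * e_i + (1 - lam) * e_perm(i) reduces to the weighted sum of two standard cross-entropy terms, which is what the last line computes.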