Consistent Assignment for Representation Learning

Feb 26, 2021 (edited Apr 25, 2021) · EBM_WS@ICLR2021 Poster · Readers: Everyone
  • Keywords: representation learning, contrastive learning, computer vision
  • TL;DR: An unsupervised method that combines contrastive learning with clustering to learn visual representations.
  • Abstract: We introduce Consistent Assignment for Representation Learning (CARL), an unsupervised learning method that learns visual representations by combining contrastive learning with deep clustering. By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations by learning a set of general prototypes that serve as energy anchors, enforcing that different views of a given image are assigned to the same prototype. Unlike contemporary work on contrastive learning with deep clustering, CARL learns the set of general prototypes in an online fashion, using gradient descent, without the need to perform offline clustering or to use non-differentiable algorithms to solve the cluster assignment problem. CARL achieves results comparable to current state-of-the-art methods on the CIFAR-10, CIFAR-100, and STL-10 datasets.
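As a rough illustration of the assignment idea described in the abstract (this is not the authors' implementation; the prototype count, temperature, and the symmetric cross-entropy consistency loss below are all assumptions), two augmented views can each be softly assigned to a shared set of learnable prototypes, and a loss can penalize disagreement between the two assignment distributions:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def assignment_probs(embeddings, prototypes, tau=0.1):
    # soft assignment of embeddings to prototypes via
    # temperature-scaled cosine similarity (tau is an assumed value)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    c = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return softmax(z @ c.T / tau, axis=1)

def consistency_loss(p1, p2, eps=1e-8):
    # symmetric cross-entropy: each view's assignment distribution
    # should predict the other view's assignment (an assumed loss form)
    ce12 = -(p1 * np.log(p2 + eps)).sum(axis=1)
    ce21 = -(p2 * np.log(p1 + eps)).sum(axis=1)
    return float((ce12 + ce21).mean() / 2)

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(16, 32))           # 16 hypothetical prototypes, dim 32
view1 = rng.normal(size=(4, 32))                 # embeddings of 4 images, view 1
view2 = view1 + 0.05 * rng.normal(size=(4, 32))  # slightly perturbed second view

p1 = assignment_probs(view1, prototypes)
p2 = assignment_probs(view2, prototypes)
loss = consistency_loss(p1, p2)
print(f"consistency loss: {loss:.4f}")
```

In the paper's setting, the prototypes would be parameters of the network and updated online by gradient descent on such a loss, which is what removes the need for an offline or non-differentiable cluster-assignment step.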