Keywords: Contrastive Learning, Invariant Space
Abstract: Recent years have witnessed the effectiveness of contrastive learning in obtaining
representations of a dataset that are useful for interpretation and downstream tasks.
However, the mechanism by which contrastive learning succeeds in this feat
has not been fully uncovered. In this paper, we show that contrastive learning can
uncover a fine decomposition of the dataset into a set of latent features defined by
the augmentations, and that such a decomposition can be achieved simply by changing
the metric in the SimCLR-type loss.
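For concreteness, the following is a minimal sketch (not taken from the paper) of a SimCLR-type NT-Xent loss in which the similarity metric is a swappable argument; the function names, the cosine default, and the temperature value are illustrative assumptions, and the paper's proposed metric would be substituted for `sim_fn`.

```python
import torch
import torch.nn.functional as F

def cosine_sim(z1, z2):
    # Standard SimCLR similarity: cosine between L2-normalized embeddings.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    return z1 @ z2.T

def nt_xent_loss(z_a, z_b, sim_fn=cosine_sim, temperature=0.5):
    """SimCLR-style NT-Xent loss with a pluggable similarity metric.

    z_a, z_b: (N, d) embeddings of two augmented views of the same N samples.
    sim_fn:   pairwise similarity metric; swapping it changes the geometry
              that the contrastive loss imposes on the representation space.
    """
    n = z_a.shape[0]
    z = torch.cat([z_a, z_b], dim=0)            # (2N, d) stacked views
    sim = sim_fn(z, z) / temperature            # (2N, 2N) pairwise similarities
    sim.fill_diagonal_(float('-inf'))           # exclude self-pairs from the softmax
    # The positive for sample i is its other augmented view (index i+N or i-N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```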