Good Fences Make Good Neighbours

Published: 31 Jul 2023, Last Modified: 31 Jul 2023
Venue: VIPriors 2023 Oral
Keywords: Contrastive learning, Neighbour contrastive learning, Latent data augmentations, Generating latent representations, Good Neighbours, Visual priors
TL;DR: Identifying potential bad neighbours and replacing them via latent-space augmentations for better neighbour contrastive learning
Abstract: Neighbour contrastive learning enhances common contrastive learning methods by introducing neighbour representations into the training of pretext tasks. These algorithms are highly dependent on the retrieved neighbours and therefore require careful neighbour extraction to avoid learning irrelevant representations. Potential "Bad" Neighbours in contrastive tasks introduce representations that are less informative and, consequently, hold back the capacity of the model, making it less useful as a prior. In this work, we present a simple yet effective neighbour contrastive SSL framework, called "Mending Neighbours", which identifies potential bad neighbours and replaces them with a novel augmented representation called "Bridge Points". The Bridge Points are generated in the latent space by interpolating the neighbour and query representations in a completely unsupervised way. We show that careful selection and replacement of neighbours lets the model learn better representations. Our proposed method outperforms the most popular neighbour contrastive approach, NNCLR, on three different benchmark datasets in the linear evaluation downstream task. Finally, we perform an in-depth three-fold analysis (quantitative, qualitative and ablation) to further support the importance of proper neighbour selection in contrastive learning algorithms.
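The abstract does not spell out the bad-neighbour criterion or the interpolation scheme, so the sketch below is only one plausible reading of the idea: retrieve each query's nearest neighbour from a support set (as in NNCLR), flag neighbours whose similarity to the query falls below a threshold, and replace them with a linearly interpolated Bridge Point. The function name, the cosine-similarity test, and the interpolation coefficient `alpha` are all assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def mend_neighbours(query, support_set, sim_threshold=0.5, alpha=0.5):
    """Hypothetical sketch of the "Mending Neighbours" idea: NNCLR-style
    nearest-neighbour retrieval, followed by detection of potential "bad"
    neighbours and their replacement with interpolated "Bridge Points".

    query:       (B, D) batch of query representations
    support_set: (N, D) support-set representations
    """
    q = F.normalize(query, dim=1)
    s = F.normalize(support_set, dim=1)

    # Nearest-neighbour retrieval by cosine similarity (as in NNCLR).
    sims = q @ s.T                      # (B, N) pairwise similarities
    best_sim, best_idx = sims.max(dim=1)
    neighbours = s[best_idx]            # (B, D) retrieved neighbours

    # Flag potential "bad" neighbours: insufficiently similar to the query.
    bad = best_sim < sim_threshold      # (B,) boolean mask

    # Bridge Points: interpolate query and neighbour in the latent space.
    bridge = F.normalize(alpha * q + (1 - alpha) * neighbours, dim=1)

    # Keep good neighbours; replace bad ones with their Bridge Points.
    return torch.where(bad.unsqueeze(1), bridge, neighbours)
```

Under this reading, the returned tensor would stand in for the plain nearest neighbours in the contrastive loss; how the threshold and coefficient are actually chosen (fixed, scheduled, or learned) is left open by the abstract.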
Submission Number: 21