Keywords: contrastive learning, OOD generalization, representation learning, deep ensembles
TL;DR: We improved the generalizability of contrastive pre-trained encoders to OOD data by ensembling encoders directly in the embedding space.
Abstract: The quality of self-supervised pre-trained embeddings on out-of-distribution (OOD) data declines without fine-tuning. A straightforward approach to improving the generalization of pre-trained representations to OOD data is the use of deep ensembles. However, obtaining an ensemble of encoders in the embedding space with only unlabeled data remains an unsolved problem. In this paper, we first perform a theoretical analysis that reveals the relationship between the individual hyperspherical embedding spaces in an ensemble. We then design a novel, principled embedding-space ensemble method that aligns these embedding spaces in an unsupervised way. Experimental results on the MNIST dataset show that our embedding-space ensemble improves pre-trained embedding quality on both in-distribution and OOD data compared to single encoders.
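The abstract's core idea (aligning the hyperspherical embedding spaces of several encoders before averaging) can be illustrated with a minimal sketch. This is not the paper's actual algorithm: it assumes each encoder embeds the same unlabeled batch, uses orthogonal Procrustes alignment to a reference encoder as a stand-in for the paper's unsupervised alignment, and re-projects the average onto the unit hypersphere.

```python
import numpy as np

def align_and_ensemble(embeddings_list):
    """Illustrative sketch (not the paper's method): given a list of
    (n_samples, dim) unit-norm embedding matrices of the SAME unlabeled
    inputs from different encoders, align each to the first encoder via
    orthogonal Procrustes, average, and re-normalize to the hypersphere."""
    ref = embeddings_list[0]
    aligned = [ref]
    for Z in embeddings_list[1:]:
        # Orthogonal Procrustes: find orthogonal R minimizing ||Z R - ref||_F,
        # solved in closed form from the SVD of Z^T ref.
        U, _, Vt = np.linalg.svd(Z.T @ ref)
        aligned.append(Z @ (U @ Vt))
    mean = np.mean(aligned, axis=0)
    # Project the averaged embeddings back onto the unit hypersphere.
    return mean / np.linalg.norm(mean, axis=1, keepdims=True)
```

Without the alignment step, naively averaging embeddings from independently trained encoders is not meaningful, since each contrastive embedding space is only defined up to an orthogonal transformation.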
Submission Number: 68