Towards a Unified Framework of Contrastive Learning for Disentangled Representations

Published: 21 Sept 2023, Last Modified: 08 Jan 2024. NeurIPS 2023 poster.
Keywords: Disentanglement, Contrastive Learning, Identifiability, Representation Learning, Nonlinear ICA
TL;DR: We provide unified theoretical guarantees for disentanglement for a broader family of contrastive methods and prove identifiability of the true latents for four contrastive losses, without imposing common independence assumptions.
Abstract: Contrastive learning has recently emerged as a promising approach for learning data representations that discover and disentangle the explanatory factors of the data. Previous analyses of such approaches have largely focused on individual contrastive losses, such as noise-contrastive estimation (NCE) and InfoNCE, and rely on specific assumptions about the data-generating process. This paper extends the theoretical guarantees for disentanglement to a broader family of contrastive methods, while also relaxing the assumptions about the data distribution. Specifically, we prove identifiability of the true latents for the four contrastive losses studied in this paper, without imposing common independence assumptions. The theoretical findings are validated on several benchmark datasets. Finally, we investigate the practical limitations of these methods.
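For readers unfamiliar with the InfoNCE objective named in the abstract, the sketch below shows its standard form for a single anchor: the positive pair's similarity is scored against negatives via a temperature-scaled softmax. The function name, cosine similarity choice, and temperature value are illustrative assumptions, not details from this paper.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: negative log-probability of the
    positive among {positive} + negatives under a softmax over
    temperature-scaled cosine similarities. (Illustrative sketch.)"""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # subtract max for numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
z = rng.normal(size=8)
# An aligned positive with random negatives should score a lower loss
# than a misaligned positive with identical-to-anchor negatives.
loss_easy = info_nce(z, z, [rng.normal(size=8) for _ in range(4)])
loss_hard = info_nce(z, -z, [z for _ in range(4)])
print(loss_easy < loss_hard)
```

The loss is minimized when the anchor's representation is close to its positive and far from the negatives, which is the mechanism the paper's identifiability analysis builds on.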
Supplementary Material: zip
Submission Number: 7057