An Information Criterion for Controlled Disentanglement of Multimodal Data

Published: 10 Oct 2024, Last Modified: 30 Oct 2024 · UniReps · CC BY 4.0
Track: Extended Abstract Track
Keywords: Multimodal Representation Learning, Disentanglement, Self-Supervised Learning, Information Theory
Abstract: Multimodal representation learning seeks to relate and decompose the information available in multiple modalities. By disentangling modality-specific information from information that is shared across modalities, we can improve interpretability and robustness, and enable tasks such as counterfactual generation. However, separating these components is challenging due to their deep entanglement in real-world data. We propose $\textbf{Disentangled}$ $\textbf{S}$elf-$\textbf{S}$upervised $\textbf{L}$earning (DisentangledSSL), a novel self-supervised approach that effectively learns disentangled representations, even when the so-called $\textit{Minimum Necessary Information}$ (MNI) point is not achievable. It outperforms baselines on multiple synthetic and real-world datasets, excelling in downstream tasks including prediction on vision-language data and molecule-phenotype retrieval on biological data.
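The shared/specific decomposition the abstract describes can be illustrated on toy data. The sketch below is a hypothetical example, not the paper's method: two modalities are built from one shared latent and one modality-specific latent each, and the correlation structure across modalities reveals exactly the split a disentangled representation aims to recover.

```python
import numpy as np

# Toy illustration (an assumption for exposition, not DisentangledSSL itself):
# each modality concatenates a noisy copy of a shared latent s with its own
# modality-specific latent (m1 or m2).
rng = np.random.default_rng(0)
n = 10_000
s = rng.normal(size=n)    # information shared across modalities
m1 = rng.normal(size=n)   # specific to modality 1
m2 = rng.normal(size=n)   # specific to modality 2

x1 = np.stack([s + 0.1 * rng.normal(size=n), m1], axis=1)  # modality 1
x2 = np.stack([s + 0.1 * rng.normal(size=n), m2], axis=1)  # modality 2

# The shared coordinates are strongly correlated across modalities,
# while the specific coordinates are essentially uncorrelated.
shared_corr = np.corrcoef(x1[:, 0], x2[:, 0])[0, 1]
specific_corr = np.corrcoef(x1[:, 1], x2[:, 1])[0, 1]
print(f"shared: {shared_corr:.2f}, specific: {abs(specific_corr):.2f}")
```

In real data the shared and specific factors are entangled nonlinearly inside each modality, which is why a learned, information-theoretically controlled decomposition is needed rather than a coordinate split.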
Submission Number: 56