Towards Uniformity and Alignment for Multimodal Representation Learning

16 Sept 2025 (modified: 11 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Multimodal Representation Learning, CLIP, Alignment
Abstract: Multimodal representation learning aims to construct a shared embedding space in which heterogeneous modalities are semantically aligned. Despite strong empirical results, InfoNCE-based objectives introduce inherent conflicts that yield distribution gaps across modalities. We identify and formally analyze two such conflicts in the multimodal regime, both of which are exacerbated as the number of modalities \(M\) increases: (i) an alignment–uniformity conflict, whereby uniform repulsion undermines positive-pair alignment, and (ii) an intra-alignment conflict stemming from the non-collinearity of multi-way positives. To resolve these conflicts, we propose a principled decoupling of alignment and uniformity. We then establish a theoretical guarantee that this decoupling mitigates the distribution gap: introducing a global Hölder divergence over the modality distributions, we show that our decoupled losses act as efficient proxies for minimizing this cross-modal divergence. Extensive experiments on retrieval and UnCLIP-style generation demonstrate consistent gains. Overall, this work provides a conflict-free recipe and theoretical guidance for multimodal learning that simultaneously supports discriminative and generative use cases without task-specific modules.
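The paper's exact loss formulation is not given on this page. As a rough illustration of what "decoupling alignment and uniformity" can mean, the sketch below follows the standard alignment/uniformity losses of Wang & Isola (2020), applying alignment to every cross-modal positive pair and uniformity within each modality separately, so that repulsion never acts against positives. The multimodal combination, weights, and function names here are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn.functional as F

def alignment_loss(x, y, alpha=2):
    # Pull matched (positive) pairs together on the unit hypersphere.
    # x, y: L2-normalized embeddings of paired samples, shape (N, d).
    return (x - y).norm(dim=1).pow(alpha).mean()

def uniformity_loss(x, t=2):
    # Spread one modality's embeddings over the hypersphere via a
    # Gaussian-potential kernel on pairwise squared distances.
    # x: L2-normalized embeddings, shape (N, d).
    sq_dists = torch.pdist(x, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

def decoupled_loss(embeddings, w_align=1.0, w_unif=1.0):
    # Hypothetical multimodal combination: alignment over all M*(M-1)/2
    # modality pairs, uniformity within each modality (weights illustrative).
    # embeddings: list of M tensors, each (N, d); row i is matched across modalities.
    embeddings = [F.normalize(z, dim=1) for z in embeddings]
    M = len(embeddings)
    align = sum(alignment_loss(embeddings[i], embeddings[j])
                for i in range(M) for j in range(i + 1, M))
    unif = sum(uniformity_loss(z) for z in embeddings)
    return w_align * align + w_unif * unif
```

Because the two terms are separate, the uniformity gradient acts only within each modality and cannot repel a cross-modal positive pair, which is the conflict the abstract attributes to the coupled InfoNCE objective.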
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 7361