Improving Multimodal Learning Balance and Sufficiency through Data Remixing

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-SA 4.0
Abstract: Different modalities exhibit considerable gaps in their optimization trajectories, including speeds and paths, which lead to *modality laziness* and *modality clash* when multimodal models are trained jointly, resulting in insufficient and imbalanced multimodal learning. Existing methods focus on strengthening the weak modality by adding modality-specific optimization objectives, aligning optimization speeds across modalities, or decomposing multimodal learning to enhance unimodal learning; they fail to achieve both unimodal sufficiency and multimodal balance. In this paper, we address both concerns for the first time by proposing multimodal Data Remixing: we decouple multimodal data and filter hard samples for each modality to mitigate modality imbalance, and then reassemble batches so that gradient directions are aligned and cross-modal interference is avoided, thereby enhancing unimodal learning sufficiency. Experimental results demonstrate that our method can be seamlessly integrated with existing approaches, improving accuracy by approximately **6.50\%$\uparrow$** on CREMA-D and **3.41\%$\uparrow$** on Kinetics-Sounds, without training set expansion or additional computational overhead during inference. The source code is available at Data Remixing.
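A minimal sketch of how the two steps described in the abstract could be realized in PyTorch: route each sample to the modality that still finds it hard (decoupling and filtering), then build single-modality batches (reassembling). The interface here is an assumption for illustration; `audio_logits`, `visual_logits`, and the confidence-based routing rule are hypothetical and not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def remix_batches(audio_logits, visual_logits, labels, batch_size=64):
    """Illustrative sketch of Data Remixing (hypothetical interface).

    audio_logits, visual_logits: [N, C] per-sample unimodal predictions
    labels: [N] ground-truth class indices
    Returns single-modality batches of sample indices.
    """
    # Per-sample confidence of each modality on the true class.
    conf_a = F.softmax(audio_logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
    conf_v = F.softmax(visual_logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)

    # Decoupling: assign each sample to the modality that still finds it hard,
    # so the weaker modality receives dedicated updates instead of being suppressed.
    to_audio = (conf_a < conf_v).nonzero(as_tuple=True)[0].tolist()
    to_visual = (conf_a >= conf_v).nonzero(as_tuple=True)[0].tolist()

    def chunk(indices):
        return [indices[i:i + batch_size] for i in range(0, len(indices), batch_size)]

    # Reassembling: each batch contains inputs for one modality only (the other
    # modality is masked at train time), so within-batch gradients do not pull
    # the shared parameters in conflicting cross-modal directions.
    return {"audio": chunk(to_audio), "visual": chunk(to_visual)}
```

In this sketch, the routing scores would be refreshed periodically during training so that sample-to-modality assignments track the current state of each unimodal branch.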
Lay Summary: The *modality imbalance* problem refers to the phenomenon in which, during multimodal joint training, the strong modality tends to suppress the learning of the weak one. In our study, we observe that the weak modality can also interfere with the learning of the strong one; we examine this phenomenon in depth and name it *modality clash*. To address these issues, we introduce an adaptive data allocation mechanism called Data Remixing. It decouples multimodal inputs by evaluating each sample and assigning it to the most appropriate modality for training, which yields more balanced learning across modalities, and it then reassembles unimodal inputs at the batch level to further mitigate cross-modal interference. Extensive experiments demonstrate that our approach performs well on multimodal co-decision tasks, significantly enhancing both unimodal and multimodal representation capabilities.
Primary Area: General Machine Learning->Representation Learning
Keywords: Multimodal Learning, Machine Learning, Representation Learning
Submission Number: 11027