Exploring Training Time Modality Incompleteness and Learning from Diverse Modalities

ICLR 2026 Conference Submission 20301 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Training time modality incompleteness, multimodal learning, depression detection
TL;DR: A two-stage framework designed to address training-time modality incompleteness without requiring co-occurring samples
Abstract: Multimodal learning benefits from complementary signals across different data sources, but real-world scenarios often encounter missing modalities, particularly during training. Existing approaches focus on addressing this issue at test time and typically rely on fully co-occurring multimodal data, which can be difficult and costly to collect. We propose a two-stage framework that addresses training-time modality incompleteness without requiring co-occurring samples. The first stage, Data Fusing with Label-guided Mapping (DFLM), constructs a pseudo-multimodal dataset by aligning user data across modalities with supervised contrastive learning guided by shared labels. The second stage, Cooperative Cross-attention Multimodal Transformer (CCAMT), learns from the constructed dataset using a cross-attention mechanism that supports both modality-specific learning and cross-modal interaction between drastically different modalities. Evaluation on three popular datasets (Multimodal Twitter, Multimodal Reddit, and StudentLife) shows that CCAMT outperforms the best published baselines across all metrics. CCAMT achieves 96.5% accuracy, exceeding single-modal baselines by up to 10.5%, and the physical activity data increases the model's accuracy by 2.8%. CCAMT also surpasses the state-of-the-art time2vec multimodal transformer by 3% in accuracy, 2.9% in F1 score, 0.9% in precision, and 2.8% in recall, and outperforms other strong multimodal baselines by up to 7.7% in accuracy and 6.8% in F1 score. Our robustness analysis with imbalanced data shows that CCAMT achieves 74.2% accuracy with only 10% of the data, well above the time2vec Transformer (47.3%) and SetTransformer (50.2%). An edge deployment evaluation further shows that CCAMT's encoder configuration is up to 83.04% faster than other configurations on an NVIDIA Jetson device.
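For intuition, below is a minimal, hypothetical PyTorch sketch of the two ideas the abstract describes: a label-guided supervised contrastive loss that aligns embeddings from non-co-occurring modalities (the Stage-1 DFLM idea), and a bidirectional cross-attention block that fuses the two modality streams for classification (the Stage-2 CCAMT idea). All function names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def supervised_contrastive_loss(z_a, z_b, labels, temperature=0.1):
    """Label-guided cross-modal alignment (illustrative Stage-1 sketch).

    z_a, z_b: (N, d) embeddings from two modalities; no co-occurring pairs
    are assumed, positives are formed purely by shared labels.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                               # cross-modal similarities (N, N)
    pos_mask = labels.unsqueeze(1).eq(labels.unsqueeze(0)).float()     # same label => positive pair
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood over all label-matched cross-modal pairs per anchor.
    loss = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

class CrossAttentionFusion(nn.Module):
    """Illustrative Stage-2 sketch: each modality attends to the other,
    and the pooled fused representations feed a classifier."""
    def __init__(self, dim=64, num_heads=4, num_classes=2):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, tokens_a, tokens_b):
        # tokens_*: (B, T, dim) token sequences from the two modalities
        a2b, _ = self.attn_ab(tokens_a, tokens_b, tokens_b)  # modality A queries B
        b2a, _ = self.attn_ba(tokens_b, tokens_a, tokens_a)  # modality B queries A
        fused = torch.cat([a2b.mean(1), b2a.mean(1)], dim=-1)
        return self.classifier(fused)

# Toy usage with random tensors standing in for text and activity features.
z_text, z_activity = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.randint(0, 2, (8,))
print(supervised_contrastive_loss(z_text, z_activity, labels))
print(CrossAttentionFusion()(torch.randn(8, 16, 64), torch.randn(8, 16, 64)).shape)
```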
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 20301