CACARA: Cross-Modal Alignment Leveraging a Text-Centric Approach for Cost-Effective Multimodal and Multilingual Learning

ACL ARR 2025 February Submission 7485 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: As deep learning models evolve, new applications and challenges emerge rapidly. Tasks that once relied on a single modality -- text, images, or audio -- are now enriched by seamless interactions across modalities. These connections bridge information gaps: an image can visually ground a text, while audio can add context to an image. Researchers have developed numerous multimodal models, but most rely on resource-intensive training across multiple modalities. Extending these models to new languages typically follows the same resource-heavy strategy. In this work, we propose CACARA, a multimodal and multilingual architecture trained through emergent alignment learning, which integrates new modalities into an existing bimodal or multimodal model without requiring full retraining. The same approach extends the model's linguistic capabilities while preserving previously learned knowledge. Multimodal and multilingual properties emerge through alignment learning, which leverages prior training to synchronize multiple modalities and languages. Our strategy achieves up to a 14.24 percentage point (pp) improvement in R@1 audio-to-text retrieval, outperforming state-of-the-art multimodal models -- all without the heavy computational cost of retraining across every modality and language.
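The abstract describes the alignment strategy only at a high level, so the following PyTorch sketch illustrates one plausible reading: a frozen, pretrained text encoder anchors a shared embedding space, and a newly added audio encoder is trained against it with a contrastive (InfoNCE-style) loss. The encoder classes (FrozenTextEncoder, NewAudioEncoder), dimensions, and objective here are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of aligning a new modality to a frozen text-centric model.
# All module names, sizes, and the contrastive objective are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenTextEncoder(nn.Module):
    """Stand-in for a pretrained text encoder; its weights are never updated."""
    def __init__(self, vocab_size=30000, dim=512):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, token_ids):
        # (batch, seq_len) token ids -> L2-normalized (batch, dim) embeddings
        return F.normalize(self.embed(token_ids), dim=-1)

class NewAudioEncoder(nn.Module):
    """New-modality encoder that projects audio features into the text embedding space."""
    def __init__(self, feat_dim=128, dim=512):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, audio_feats):
        return F.normalize(self.proj(audio_feats), dim=-1)

def contrastive_alignment_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings."""
    logits = audio_emb @ text_emb.t() / temperature
    targets = torch.arange(audio_emb.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    text_enc, audio_enc = FrozenTextEncoder(), NewAudioEncoder()
    # Only the new encoder's parameters are optimized; the text branch stays frozen.
    optimizer = torch.optim.AdamW(audio_enc.parameters(), lr=1e-4)
    token_ids = torch.randint(0, 30000, (8, 16))   # dummy paired batch of captions
    audio_feats = torch.randn(8, 128)              # dummy paired batch of audio features
    loss = contrastive_alignment_loss(audio_enc(audio_feats), text_enc(token_ids))
    loss.backward()
    optimizer.step()
    print(f"alignment loss: {loss.item():.4f}")

Because only the new encoder receives gradients under these assumptions, the previously learned text representation is left intact, mirroring the abstract's claim of adding modalities and languages without retraining the whole model.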
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Multimodal, Multilingual, Emergent Alignment
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: English, Portuguese, Spanish, French, German, Chinese, Japanese, Russian, Turkish, Hindi, Arabic, Swahili
Submission Number: 7485