Exploring Alignment in Shared Cross-Lingual Spaces

Published: 21 Sept 2024 · Last Modified: 06 Oct 2024 · BlackboxNLP 2024 · CC BY 4.0
Track: Extended abstract
Keywords: Multilinguality, Interpretability, Representation Analysis
TL;DR: We explore shared cross-lingual latent spaces in transformer models through concept alignment and overlap.
Abstract: Despite their remarkable ability to capture linguistic nuances across diverse languages, questions persist regarding the degree of alignment between languages in multilingual embeddings. Drawing inspiration from research on high-dimensional representations in neural language models, we employ clustering to uncover latent concepts within multilingual models. Our analysis focuses on quantifying the alignment and overlap of these concepts across various languages within the latent space. To this end, we introduce two metrics, CALIGN and COLAP, aimed at quantifying these aspects and enabling a deeper exploration of multilingual embeddings. Our study encompasses three multilingual models (mT5, mBERT, and XLM-RoBERTa) and three downstream tasks (Machine Translation, Named Entity Recognition, and Sentiment Analysis). Key findings from our research include: i) deeper layers in the network demonstrate increased cross-lingual alignment, which we attribute to the presence of language-agnostic concepts; ii) fine-tuning of the models enhances alignment within the latent space; and iii) such task-specific calibration helps explain the emergence of zero-shot capabilities in the models.
Submission Number: 36
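As a rough illustration of the approach described in the abstract, the sketch below clusters layer activations per language and scores how many concept clusters in one language have a close counterpart in another. The function names, the choice of k-means, and the distance-threshold criterion are assumptions for illustration only; they are not the paper's CALIGN or COLAP definitions.

```python
# Illustrative sketch only: clusters token embeddings per language and
# computes a rough cross-lingual alignment score. The threshold-based
# "aligned cluster" criterion is a simplification, not the paper's
# CALIGN/COLAP metrics.
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def cluster_concepts(embeddings: np.ndarray, n_clusters: int = 50) -> np.ndarray:
    """Group token representations into latent 'concept' clusters; returns centroids."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    return km.cluster_centers_

def alignment_score(centroids_a: np.ndarray, centroids_b: np.ndarray,
                    threshold: float = 0.3) -> float:
    """Fraction of language-A concept clusters whose nearest language-B centroid
    lies within `threshold` cosine distance -- a crude proxy for alignment."""
    dists = cdist(centroids_a, centroids_b, metric="cosine")
    return float(np.mean(dists.min(axis=1) < threshold))

# Usage (assumes emb_en and emb_de are [num_tokens, hidden_dim] arrays of
# layer activations extracted from a multilingual encoder such as mBERT):
# score = alignment_score(cluster_concepts(emb_en), cluster_concepts(emb_de))
```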