Keywords: Causal Representation Learning, Multilingual Language Models, Representation Disentanglement, Low-resource Language Modeling, Subspace Probing, Generative Modeling
TL;DR: Causal subspace interventions show that reducing representational entanglement with high-resource languages improves generative modeling for related low-resource varieties.
Abstract: It is often assumed that aligning low-resource varieties with high-resource standards improves modeling in multilingual Large Language Models (LLMs). We challenge this view with the first causal study showing that excessive representational entanglement with dominant varieties can reduce generative quality. We introduce an online variational probing method that continuously estimates the subspace of a dominant variety during fine-tuning on a generative task and penalizes representations in that subspace to reduce its span. Across six language families, we find that reducing alignment consistently boosts low-resource translation performance, including +11.7 ChrF++ for European Portuguese, +5.3 for Indonesian, +4.6 for Kven Finnish, and +2.7 for Low German. In Arabic, several dialects improve by up to +4.3 ChrF++ despite sharp drops on cross-lingual tasks such as translation into MSA, English, or French, suggesting that the effect extends beyond simple cross-lingual alignment. Alongside these causal results, we present qualitative and observational evidence from information-theoretic and geometric probing that further supports our hypothesis. Together, our findings establish that disentangling high-resource subspaces can unlock capacity for related low-resource varieties and provide practical tools for controlling representational allocation in multilingual LLMs.
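To make the probe-and-penalize idea concrete, here is a minimal PyTorch sketch of what such an online loop might look like. It is our illustrative assumption, not the authors' implementation: the names `OnlineSubspaceProbe`, `training_step`, `lam`, and `probe_labels` are hypothetical, and a deterministic linear probe stands in for the paper's variational one.

```python
import torch
import torch.nn as nn


class OnlineSubspaceProbe(nn.Module):
    """Linear probe whose row space estimates the dominant-variety subspace.

    Trained online, alongside fine-tuning, to classify whether a hidden
    state comes from the dominant variety (simplified, non-variational).
    """

    def __init__(self, hidden_dim: int, rank: int = 16):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, rank, bias=False)
        self.cls = nn.Linear(rank, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Logit for "this representation belongs to the dominant variety".
        return self.cls(self.proj(h)).squeeze(-1)

    def subspace_penalty(self, h: torch.Tensor) -> torch.Tensor:
        # Orthonormalize the probe directions (detached so the penalty
        # only updates the model), then penalize the energy of h that
        # falls inside the estimated dominant-variety subspace.
        q, _ = torch.linalg.qr(self.proj.weight.detach().T)  # (hidden_dim, rank)
        return (h @ q).pow(2).sum(-1).mean()


def training_step(model, probe, batch, probe_labels, lam=0.1):
    """One hypothetical step: task loss + subspace penalty, plus probe loss."""
    out = model(**batch, output_hidden_states=True)
    h = out.hidden_states[-1]  # (batch, seq_len, hidden_dim)
    task_loss = out.loss

    # 1) Update the probe online so it keeps tracking the subspace
    #    (detached: probe training does not backprop into the model).
    probe_logits = probe(h.detach().mean(dim=1))  # pooled per sequence
    probe_loss = nn.functional.binary_cross_entropy_with_logits(
        probe_logits, probe_labels.float()
    )

    # 2) Penalize the model for placing mass in that subspace.
    penalty = probe.subspace_penalty(h)

    return task_loss + lam * penalty, probe_loss
```

In this sketch the two losses are optimized by separate optimizers (one for the model, one for the probe), so the probe chases the current dominant-variety subspace while the model is pushed to shrink its span.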
Primary Area: interpretability and explainable AI
Submission Number: 17848