Interference Matrix: Quantifying Cross-Lingual Interference in Transformer Encoders

Published: 2025, Last Modified: 05 Dec 2025, CoRR 2025, CC BY-SA 4.0
Abstract: We present a comprehensive study of language interference in encoder-only Transformer models across 83 languages. We construct an interference matrix by training and evaluating small BERT-like models on all possible language pairs, yielding a large-scale quantification of cross-lingual interference. Our analysis reveals that interference between languages is asymmetric and that its patterns align neither with traditional linguistic characteristics, such as language family, nor with proxies like embedding similarity; instead, they relate more closely to script. Finally, we show that the interference matrix effectively predicts downstream-task performance, serving as a tool for designing multilingual models that achieve optimal performance.
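The construction described in the abstract, comparing pairwise bilingual training runs against monolingual baselines, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the eval_score helper, the interference definition (monolingual score minus bilingual score on the same evaluation language), and the language subset are hypothetical stand-ins, not the paper's actual training or evaluation protocol.

```python
import numpy as np

# Hypothetical stand-in: in the paper's setting this would train a small
# BERT-like model on `train_langs` and return its evaluation score on
# `eval_lang` (e.g., masked-LM accuracy). Stubbed here with a deterministic
# pseudo-score so the sketch runs end to end.
def eval_score(train_langs: tuple, eval_lang: str) -> float:
    seed = sum(ord(c) for c in "".join(sorted(train_langs)) + eval_lang)
    rng = np.random.default_rng(seed)
    return float(rng.uniform(0.5, 0.9))

langs = ["en", "de", "ru", "zh", "ar"]  # illustrative subset of the 83 languages
n = len(langs)

# Monolingual baselines: score on language a of a model trained only on a.
mono = {a: eval_score((a,), a) for a in langs}

# Interference matrix: I[i, j] = drop in language i's score when language j
# is added to training. Positive entries indicate interference; negative
# entries indicate positive transfer.
I = np.zeros((n, n))
for i, a in enumerate(langs):
    for j, b in enumerate(langs):
        if i != j:
            I[i, j] = mono[a] - eval_score((a, b), a)

# Asymmetry: the effect of b on a need not match the effect of a on b,
# so I is generally not symmetric.
print("mean |I - I.T| =", float(np.abs(I - I.T).mean()))
```

Under this reading, each off-diagonal entry requires one bilingual training run, so covering all pairs of 83 languages scales quadratically, which is why the paper restricts itself to small BERT-like models.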