Unraveling Cross-Lingual Dynamics in Language Models: Independent, Shared and Transferred Factual Knowledge

Anonymous

16 Oct 2023 · ACL ARR 2023 October Blind Submission
Abstract: Acquiring factual knowledge for low-resource languages within multilingual language models (ML-LMs) presents a significant challenge due to the scarcity of real-world entities mentioned in the training data. This underscores the need to transfer knowledge from resource-rich to resource-poor languages, namely cross-lingual transfer. However, the effectiveness and extent of cross-lingual transfer of factual knowledge in ML-LMs remain largely unexplored. To address this research gap, we use evaluation results from the multilingual factual-knowledge probing dataset mLAMA to conduct a neuron-level inspection of how ML-LMs (here, multilingual BERT (mBERT)) represent facts in different languages. Additionally, we analyze the knowledge source (Wikipedia) to identify the various ways in which ML-LMs learn specific facts. As a result, we identify three patterns of knowledge learning and representation in ML-LMs: language-independent, cross-lingually shared, and transferred, and we introduce methods to differentiate them.
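
For illustration, here is a minimal sketch (Python, Hugging Face transformers) of the kind of mLAMA-style cloze probing of mBERT the abstract describes. The model name, the example prompts, and the single-[MASK] setup are assumptions for this sketch, not the paper's exact protocol.

# Minimal sketch of mLAMA-style cloze probing with mBERT.
# Assumptions: bert-base-multilingual-cased as the probed ML-LM, hand-written
# example prompts, and a single [MASK] token for the object slot (mLAMA's
# actual templates and multi-token handling may differ).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# The same fact queried in English and Spanish (hypothetical examples).
prompts = [
    "Dante Alighieri was born in [MASK].",
    "Dante Alighieri nació en [MASK].",
]

for prompt in prompts:
    # Take the model's top prediction for the masked object slot.
    top = fill_mask(prompt, top_k=1)[0]
    print(f"{prompt!r} -> {top['token_str']} (score={top['score']:.3f})")

Comparing whether the model answers the same fact correctly across languages, and which neurons activate for each language-specific prompt, is the kind of evidence one would use to separate language-independent, cross-lingually shared, and transferred knowledge.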
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: Malay, Catalan, Korean, Hebrew, Finnish, Irish, Georgian, English, Thai, Dutch, Chinese, Japanese, Basque, Danish, Portuguese, Russian, French, Serbian, Estonian, Swedish, Armenian, Welsh, Albanian, Italian, Hindi, Croatian, Spanish, Hungarian, Bulgarian, Tamil, Slovenian, Bangla, German, Indonesian, Ukrainian, Belarusian, Cebuano, Greek, Persian, Polish, Azerbaijani, Arabic, Latin, Galician, Lithuanian, Czech, Slovak, Latvian, Turkish, Afrikaans, Vietnamese, Urdu, Romanian
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.