Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: The success of multilingual pre-trained models in transferring knowledge cross-lingually is underpinned by their ability to learn representations shared by multiple languages even in the absence of any explicit supervision. However, it remains unclear how they achieve this. In this work, we conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar. In particular, we investigate whether morphosyntactic information is encoded in the same subset of neurons across different languages. We conduct the first large-scale empirical study, covering 43 typologically diverse languages and 14 morphosyntactic categories, with a state-of-the-art neuron-level probe. Our findings show that the cross-lingual overlap between neurons is significant, but that its extent varies across categories and depends on language proximity and pre-training data size.
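To make the notion of cross-lingual neuron overlap concrete, the sketch below measures how much the top-k neuron sets of different languages coincide. It is a minimal illustration, not the paper's probe: the per-language importance scores are random stand-ins for the rankings a neuron-level morphosyntactic probe would produce, and the function names, the layer width of 768, and the choice of k are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's actual method): given per-language neuron
# importance scores, compare the sets of top-k neurons across language pairs.
import numpy as np


def top_k_neurons(scores: np.ndarray, k: int) -> set:
    """Return the indices of the k highest-scoring neurons."""
    return set(np.argsort(scores)[-k:])


def overlap(neurons_a: set, neurons_b: set, k: int) -> float:
    """Fraction of the top-k neurons shared by two languages."""
    return len(neurons_a & neurons_b) / k


# Hypothetical setup: 768 neurons (one Transformer layer), top ~5% per language.
rng = np.random.default_rng(0)
n_neurons, k = 768, 38
languages = ["en", "es", "tr"]

# Stand-in scores; in practice these would come from a trained neuron-level probe.
rankings = {lang: rng.random(n_neurons) for lang in languages}
tops = {lang: top_k_neurons(scores, k) for lang, scores in rankings.items()}

for i, lang_a in enumerate(languages):
    for lang_b in languages[i + 1:]:
        print(f"{lang_a}-{lang_b} top-{k} overlap: {overlap(tops[lang_a], tops[lang_b], k):.2f}")
```

With random scores the printed overlaps hover near chance level (k / n_neurons ≈ 0.05); a significantly higher value for real probe rankings would indicate that two languages rely on the same neurons for a given morphosyntactic category.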