Neuron Specialization: Leveraging Intrinsic Task Modularity for Multilingual Machine Translation

ACL ARR 2024 June Submission1531 Authors

14 Jun 2024 (modified: 10 Jul 2024) · ACL ARR 2024 June Submission · License: CC BY 4.0
Abstract: Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference. Language-specific modeling methods show promise in reducing interference, but they often rely on heuristics to distribute capacity and struggle to foster cross-lingual transfer through isolated modules. In this paper, we explore intrinsic task modularity within multilingual networks and leverage these observations to mitigate interference in multilingual translation. We show that neurons in the feed-forward layers tend to be activated in a language-specific manner. Moreover, these specialized neurons exhibit structural overlaps that reflect language proximity and that progress across layers. Based on these findings, we propose Neuron Specialization, an approach that identifies specialized neurons to modularize feed-forward layers and then continuously updates them through sparse networks. Extensive experiments show that our approach achieves consistent performance gains over strong baselines, with additional analyses demonstrating reduced interference and increased knowledge transfer.
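The abstract's two-step recipe (identify language-specific neurons in feed-forward layers, then restrict updates to them via sparse masks) can be illustrated with a minimal sketch. The selection rule shown here (top-k neurons by post-ReLU activation frequency), the toy shapes, and all variable names are illustrative assumptions for exposition, not the paper's exact procedure.

```python
import numpy as np

# Hedged sketch of the idea described in the abstract:
# (1) find neurons in an FFN layer that activate most often for a language,
# (2) build a sparse mask so only those neurons are updated for that language.
# Shapes, k, and the frequency-based selection rule are assumptions.

rng = np.random.default_rng(0)
d_ff = 16        # toy FFN hidden size
n_tokens = 200   # tokens observed per language

def specialized_neurons(acts: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k neurons most frequently active (ReLU output > 0)."""
    freq = (acts > 0).mean(axis=0)   # per-neuron activation frequency
    return np.argsort(freq)[-k:]

# Simulated post-ReLU activations for two hypothetical languages
acts_de = np.maximum(rng.normal(size=(n_tokens, d_ff)), 0)
acts_fr = np.maximum(rng.normal(size=(n_tokens, d_ff)), 0)

spec_de = specialized_neurons(acts_de, k=6)
spec_fr = specialized_neurons(acts_fr, k=6)

# Sparse update mask: when training on German data, only gradients for the
# German-specialized neurons of this FFN layer would be applied.
mask_de = np.zeros(d_ff, dtype=bool)
mask_de[spec_de] = True

# Overlap between specialized sets is the kind of structural signal the
# abstract links to language proximity.
overlap = len(set(spec_de.tolist()) & set(spec_fr.tolist()))
print(int(mask_de.sum()), overlap)
```

In practice the mask would be applied to the FFN weight gradients during continued training, leaving non-specialized neurons shared across languages.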
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Machine Translation, Multilingualism and Cross-Lingual NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings (efficiency), Publicly available software and/or pre-trained models
Languages Studied: English, German, Dutch, French, Spanish, Russian, Czech, Hindi, Bengali, Arabic, Hebrew, Swedish, Danish, Italian, Portuguese, Polish, Bulgarian, Kannada, Marathi, Maltese, Hausa, Afrikaans, Luxembourgish, Romanian, Occitan, Ukrainian, Serbian, Sindhi, Gujarati, Tigrinya, Amharic
Submission Number: 1531