The Privileged Students: On the Value of Initialization in Multilingual Knowledge Distillation

ACL ARR 2024 June Submission3820 Authors

16 Jun 2024 (modified: 06 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Knowledge distillation (KD) has proven to be a successful strategy for improving the performance of smaller models on many NLP tasks. However, most work on KD explores only monolingual scenarios. In this paper, we investigate the value of KD in multilingual settings. We analyze how well the student model acquires multilingual knowledge from the teacher model, separating the contribution of the distillation process from that of model initialization. Our proposed method initializes the student by directly copying the teacher model's weights. Our findings show that initializing the student with weights copied from the fine-tuned teacher contributes more than the distillation process itself across various multilingual settings. Furthermore, we demonstrate that efficient weight initialization preserves multilingual capabilities even in low-resource scenarios.
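The abstract centers on initializing a smaller student by copying weights from a fine-tuned teacher. The sketch below is only an illustration of that general idea, not the authors' implementation: the function name, the student-to-teacher layer map, and the BERT/XLM-R-style parameter naming ("encoder.layer.<i>. ...") are all assumptions for the example.

```python
import re

def init_student_from_teacher(teacher_state, student_state, layer_map):
    """Illustrative weight-copy initialization (assumed setup, not the paper's code).

    Embeddings and task-head parameters are copied one-to-one; transformer
    layers are remapped through `layer_map` (student layer index -> teacher
    layer index), e.g. {0: 0, 1: 2, 2: 4} to take every other teacher layer.
    Assumes BERT/XLM-R-style parameter names such as "encoder.layer.3.attention...".
    """
    new_state = {}
    for name, tensor in student_state.items():
        match = re.match(r"(.*encoder\.layer\.)(\d+)(\..*)", name)
        if match:
            prefix, s_idx, suffix = match.groups()
            t_name = f"{prefix}{layer_map[int(s_idx)]}{suffix}"
        else:
            t_name = name  # embeddings, pooler, classification head
        if t_name in teacher_state and teacher_state[t_name].shape == tensor.shape:
            new_state[name] = teacher_state[t_name].clone()
        else:
            new_state[name] = tensor  # keep the student's own init if shapes differ
    return new_state

# Hypothetical usage: 12-layer teacher -> 6-layer student, taking every other layer.
# layer_map = {i: 2 * i for i in range(6)}
# student.load_state_dict(init_student_from_teacher(
#     teacher.state_dict(), student.state_dict(), layer_map))
```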
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: distillation, multilingual, analysis
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Approaches to low-compute settings/efficiency
Languages Studied: Afrikaans, Amharic, Arabic, Azeri, Bengali, Catalan, Chinese, Danish, German, Greek, English, Spanish, Farsi, Finnish, French, Hebrew, Hungarian, Armenian, Indonesian, Icelandic, Italian, Japanese, Javanese, Georgian, Khmer, Korean, Latvian, Mongolian, Malay, Burmese, Norwegian, Dutch, Polish, Portuguese, Romanian, Russian, Slovenian, Albanian, Swedish, Swahili, Hindi, Kannada, Malayalam, Tamil, Telugu, Thai, Tagalog, Turkish, Urdu, Vietnamese, Welsh
Submission Number: 3820