Cross-Linguistic Examination of Transfer Learning in Machine Translation for Low-Resourced Languages

ACL ARR 2025 February Submission 619 Authors

10 Feb 2025 (modified: 09 May 2025) · License: CC BY 4.0
Abstract:

This study investigates the effectiveness of transfer learning in machine translation across diverse language families by evaluating five language pairs. Models pre-trained on higher-resource languages were fine-tuned on related low-resource languages, and the effects of hyperparameters such as learning rate, batch size, number of epochs, and weight decay were examined. The language pairs span distinct linguistic backgrounds: Semitic (Modern Standard Arabic – Levantine Arabic), Chadic and Bantu (Hausa – Zulu), Romance (Spanish – Catalan), Slavic (Slovak – Macedonian), and Armenian, an independent branch of Indo-European (Eastern Armenian – Western Armenian). Results demonstrate that transfer learning is effective across different language families, although the impact of individual hyperparameters varies: a moderate batch size (e.g., 32) is generally most effective, while very high learning rates can destabilize training. The study highlights the broad applicability of transfer learning in multilingual contexts and suggests that consistent hyperparameter settings can simplify and improve the efficiency of multilingual model training.

Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Machine Translation, Transfer Learning, Hyperparameters, Multilingual NLP, Low-Resource Languages
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: Western Armenian, Catalan, Levantine Arabic, Macedonian, Zulu
Submission Number: 619