Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: We compare the performance of different parameter-efficient fine-tuning architectures on in-domain and out-of-domain test sets, as well as their training time.
Abstract: Parameter-efficient fine-tuning (PEFT) methods are increasingly vital for adapting large-scale pre-trained language models to diverse tasks, offering a balance between adaptability and computational efficiency. They are particularly important in low-resource language (LRL) neural machine translation, where translation accuracy must be improved with minimal resources. However, their practical effectiveness varies significantly across languages. We conducted comprehensive empirical experiments across LRL domains and dataset sizes to evaluate 8 PEFT methods, comprising 15 architectures in total, using the SacreBLEU score. We show that the Houlsby+Inversion adapter outperforms the baseline, demonstrating the effectiveness of PEFT methods for LRL translation.
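To make the evaluated setup concrete, the sketch below shows one way to fine-tune a Houlsby-style bottleneck adapter on a multilingual seq2seq backbone and score outputs with SacreBLEU. The `adapters` and `sacrebleu` libraries, the mBART-50 checkpoint, the adapter name `lrl_mt`, and the toy hypothesis/reference pair are illustrative assumptions; the abstract does not specify the paper's actual tooling or backbone.

```python
# Minimal sketch (assumed tooling, not the paper's code): Houlsby bottleneck
# adapter on a multilingual seq2seq model, plus corpus-level SacreBLEU scoring.
import sacrebleu
import adapters
from adapters import HoulsbyConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/mbart-large-50"  # hypothetical backbone choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Add adapter support, insert a Houlsby bottleneck adapter, and freeze all
# pre-trained weights so only the adapter parameters are updated.
adapters.init(model)
model.add_adapter("lrl_mt", config=HoulsbyConfig())
model.train_adapter("lrl_mt")
model.set_active_adapters("lrl_mt")
# ... a standard seq2seq fine-tuning loop over the LRL parallel data would go here ...

# Corpus-level SacreBLEU of decoded hypotheses against one reference stream.
hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]
print(f"SacreBLEU: {sacrebleu.corpus_bleu(hypotheses, references).score:.2f}")
```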
Paper Type: short
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: Sinhala, Tamil, Hindi, Gujarati