LLMs for Extremely Low-Resource Finno-Ugric Languages

ACL ARR 2024 June Submission4753 Authors

16 Jun 2024 (modified: 15 Jul 2024)

License: CC BY 4.0
Abstract: The advancement of large language models (LLMs) has predominantly focused on high-resource languages, leaving low-resource languages, such as those in the Finno-Ugric family, significantly underrepresented. This paper addresses this gap by focusing on Võro, Livonian, and Komi. We cover almost the entire cycle of LLM creation, from data collection to instruction tuning and evaluation. Our contributions include developing multilingual base and instruction-tuned models; creating evaluation benchmarks, including the SMUGRI-MT-Bench multi-turn conversational benchmark; and conducting human evaluation. We intend for this work to promote linguistic diversity, ensuring that lesser-resourced languages can benefit from advancements in NLP.
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: multilingualism, cross-lingual transfer, less-resourced languages, resources for less-resourced languages, endangered languages, multilingual pre-training, multilingual benchmarks, multilingual evaluation, dialects and language varieties
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data resources
Languages Studied: Võro, Livonian, Komi, Estonian
Submission Number: 4753