Multiple Sources are Better Than One: Incorporating External Knowledge in Low-Resource Glossing

ACL ARR 2024 June Submission 1982 Authors

15 Jun 2024 (modified: 11 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: In this paper, we address the data scarcity problem in automatic data-driven glossing for low-resource languages by coordinating multiple sources of linguistic expertise. We supplement models with translations at both the token and sentence level, and we leverage the extensive linguistic capabilities of modern LLMs. Our enhancements lead to an average absolute improvement of 5 percentage points in word-level accuracy over the previous state of the art on a typologically diverse dataset spanning six low-resource languages. The improvements are particularly noticeable for the lowest-resourced language, Gitksan, where we achieve a 10 percentage point improvement. Furthermore, in a simulated ultra-low-resource setting for the same six languages, training on fewer than 100 glossed sentences, we establish an average 10 percentage point improvement in word-level accuracy over the previous state-of-the-art system.
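The abstract describes supplementing a glossing model with token-level and sentence-level translations. As a rough illustration of what such input augmentation could look like, the sketch below concatenates a source line with a free translation and optional per-token lexicon entries into one model input. This is purely hypothetical: the separator markers, function name, and example tokens are not taken from the paper.

```python
# Hypothetical sketch, not the authors' implementation: one plausible way to
# combine a source line with sentence- and token-level translation signals
# into a single input string for a sequence-to-sequence glossing model.

from typing import List, Optional


def build_glossing_input(
    source_tokens: List[str],
    sentence_translation: str,
    token_translations: Optional[List[str]] = None,
) -> str:
    """Join the source line with external translation context.

    The [SRC], [TRANS], and [LEX] markers are invented for this sketch;
    the paper does not specify its input format.
    """
    parts = [
        "[SRC] " + " ".join(source_tokens),
        "[TRANS] " + sentence_translation,
    ]
    if token_translations:
        # Pair each source token with its dictionary/lexicon translation.
        lex = " ; ".join(
            f"{tok} = {trans}"
            for tok, trans in zip(source_tokens, token_translations)
        )
        parts.append("[LEX] " + lex)
    return " ".join(parts)


if __name__ == "__main__":
    # Generic placeholder tokens; real inputs would be segmented morphemes.
    example = build_glossing_input(
        source_tokens=["tok1", "tok2", "tok3"],
        sentence_translation="a free translation of the sentence",
        token_translations=["entry1", "entry2", "entry3"],
    )
    print(example)
```

A single flat string like this is only one design choice; the same signals could instead be fed through separate encoders or as retrieval context for an LLM prompt.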
Paper Type: Long
Research Area: Phonology, Morphology and Word Segmentation
Research Area Keywords: Efficient/Low-Resource Methods for NLP, Phonology, Morphology, and Word Segmentation
Contribution Types: Approaches to low-resource settings
Languages Studied: Arapaho, Gitksan, Lezgi, Natugu, Tsez, Uspanteko
Submission Number: 1982