Keywords: Low-resource languages, transfer learning, named entity recognition, Tshivenda
Abstract: Named Entity Recognition (NER) plays a vital role in various Natural Language Processing tasks such as information retrieval, text classification, and question answering. However, NER can be challenging, especially in low-resource languages with limited annotated datasets and tools. This paper contributes to addressing these challenges by introducing MphayaNER, the first Tshivenda NER corpus in the news domain. We establish NER baselines by fine-tuning state-of-the-art models on MphayaNER. The study also explores zero-shot transfer between Tshivenda and other related Bantu languages, with Setswana, chiShona and Kiswahili showing the best results. Augmenting MphayaNER with Setswana data was also found to improve model performance significantly. Both MphayaNER and the baseline models are made publicly available.
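As a rough illustration of the kind of baseline described in the abstract, the sketch below fine-tunes a pretrained multilingual encoder for token classification with Hugging Face Transformers. It is not the authors' exact setup: the dataset identifier (a public NER corpus used as a stand-in for MphayaNER), the checkpoint name, and the hyperparameters are assumptions for illustration only.

```python
# Minimal NER fine-tuning sketch (assumed setup, not the paper's exact recipe).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

# Placeholder corpus; swap in the MphayaNER data once released.
dataset = load_dataset("conll2003")
labels = dataset["train"].features["ner_tags"].feature.names

# Assumed multilingual baseline checkpoint.
model_name = "Davlan/afro-xlmr-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels))

def tokenize_and_align(batch):
    # Tokenize pre-split words and assign each word's NER tag to its first
    # sub-token; remaining sub-tokens get -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, aligned = None, []
        for wid in enc.word_ids(batch_index=i):
            aligned.append(-100 if wid is None or wid == prev else tags[wid])
            prev = wid
        enc["labels"].append(aligned)
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ner-baseline", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```

The same loop supports the zero-shot transfer experiments mentioned above: train on one Bantu-language corpus (e.g. Setswana) and evaluate directly on the Tshivenda test split without further training.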