FINE-TUNING MULTILINGUAL PRETRAINED AFRICAN LANGUAGE MODELS

Published: 03 Mar 2023, Last Modified: 02 May 2023 · AfricaNLP 2023
Keywords: low resource languages, language models, finetuning
Abstract: With the recent increase in low-resource African language text corpora, advances have led to the development of multilingual pre-trained language models (PLMs) based on African languages. These PLMs include AfriBerta \citep{ogueji2021-afriberta}, Afro-XLMR \citep{alabi-etal-2022-adapting-afro-xlmr} and AfroLM \citep{afrolm}, all of which perform well. The downstream tasks of these models range from text classification and named entity recognition to sentiment analysis. By fine-tuning the different PLMs on different African language datasets, we can obtain multilingual models that perform well on new data for the required downstream classification task. This leads to the question we attempt to answer: can these PLMs be fine-tuned to perform similarly well on different African language data?
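Below is a minimal fine-tuning sketch of the kind the abstract describes, using the Hugging Face transformers and datasets libraries. The checkpoint identifier (Davlan/afro-xlmr-base), the hyperparameters, and the four-sentence toy corpus with arbitrary binary labels are illustrative assumptions, not taken from the paper; in practice the training split would be one of the African language classification datasets the abstract alludes to.

```python
# Sketch: fine-tune a multilingual African-language PLM for text classification.
# Model name, toy data, and labels below are assumptions for demonstration only.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "Davlan/afro-xlmr-base"  # assumed Hub checkpoint for Afro-XLMR

# Tiny placeholder corpus with arbitrary binary labels; replace with a real
# African-language classification dataset in practice.
train_data = Dataset.from_dict({
    "text": ["Habari za asubuhi", "Sannu da zuwa", "Bawo ni", "E kaaro"],
    "label": [0, 1, 0, 1],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Pad/truncate to a fixed length so examples batch cleanly.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_data = train_data.map(tokenize, batched=True)

# Attach a fresh classification head on top of the pretrained encoder.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2
)

args = TrainingArguments(
    output_dir="afro-xlmr-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=train_data)
trainer.train()
```

Swapping MODEL_NAME for an AfriBerta or AfroLM checkpoint would let the same script compare how each PLM fine-tunes on the same data, which is the comparison the abstract's research question points toward.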