Task-Adaptive Pre-Training for Boosting Learning With Noisy Labels: A Study on Text Classification for African Languages

Published: 08 Apr 2022, Last Modified: 05 May 2023, AfricaNLP 2022
Abstract: For high-resource languages like English, text classification is a well-studied task. Modern NLP models easily achieve an accuracy of more than 90\% on many standard English text classification datasets \citep{xie2019unsupervised, Yang2019, Zaheer2020}. However, text classification in low-resource languages remains challenging due to the lack of annotated data. Although methods like weak supervision and crowdsourcing can help ease the annotation bottleneck, the annotations obtained by these methods contain label noise, and models trained with label noise may not generalize well. To this end, a variety of noise-handling techniques have been proposed to alleviate the negative impact caused by annotation errors (for extensive surveys see \citep{hedderich-etal-2021-survey, DBLP:journals/kbs/AlganU21}). In this work, we experiment with a group of standard noise-handling methods on text classification tasks with noisy labels. We study both simulated noise and realistic noise induced by weak supervision. Moreover, we find that task-adaptive pre-training \citep{DBLP:conf/acl/GururanganMSLBD20} is beneficial for learning with noisy labels.
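
As a rough illustration of the task-adaptive pre-training step the abstract refers to, the sketch below continues masked-language-model training on the task's own unlabeled text and then loads the adapted encoder for fine-tuning on the (noisily labeled) classification data. The backbone name, file path, label count, and hyperparameters are illustrative assumptions, not details taken from the paper; any noise-handling method would be applied in the subsequent fine-tuning stage.

```python
# Hedged sketch of task-adaptive pre-training (TAPT) with HuggingFace
# transformers/datasets. Backbone, paths, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "xlm-roberta-base"  # assumed multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 1) Continue masked-LM pre-training on the unlabeled task text.
unlabeled = load_dataset("text", data_files={"train": "task_texts.txt"})["train"]
unlabeled = unlabeled.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
mlm_model = AutoModelForMaskedLM.from_pretrained(model_name)
mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(
        output_dir="tapt_checkpoint",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=unlabeled,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("tapt_checkpoint")

# 2) Fine-tune the task-adapted encoder on the noisily labeled data;
#    noise-handling methods would be layered on top of this step.
classifier = AutoModelForSequenceClassification.from_pretrained(
    "tapt_checkpoint", num_labels=4  # assumed number of classes
)
```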