BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: In this paper, we introduce 'BanglaBERT', a BERT-based Natural Language Understanding (NLU) model pretrained in Bangla, a widely spoken yet low-resource language in the NLP literature. To pretrain BanglaBERT, we collect 27.5 GB of Bangla pretraining data (dubbed 'Bangla2B+') by crawling 110 popular Bangla sites. We introduce a new downstream dataset for Natural Language Inference (NLI) and benchmark BanglaBERT on four diverse NLU tasks covering text classification, sequence labeling, and span prediction, bringing them together under the first-ever Bangla Language Understanding Evaluation (BLUE) benchmark. BanglaBERT achieves state-of-the-art results, outperforming both multilingual and monolingual models. We will make the BanglaBERT model, the new datasets, and a leaderboard publicly available to advance Bangla NLP.
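To illustrate how a released pretrained encoder like BanglaBERT would typically be fine-tuned on one of the BLUE text-classification tasks, here is a minimal Python sketch using the Hugging Face transformers library. The checkpoint identifier "csebuetnlp/banglabert" and the binary label count are assumptions for illustration only; the abstract promises a public release but does not specify a model identifier or task configuration.

    # Minimal sketch: load a pretrained Bangla encoder and prepare it for a
    # downstream NLU task such as binary text classification.
    # "csebuetnlp/banglabert" is an assumed Hugging Face identifier, not one
    # given in the abstract.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "csebuetnlp/banglabert"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # num_labels depends on the downstream task; 2 is a placeholder for a
    # binary classification task from the benchmark.
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Encode a Bangla sentence and run a forward pass to obtain task logits.
    inputs = tokenizer("বাংলা একটি সমৃদ্ধ ভাষা।", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.logits.shape)  # (1, num_labels)

From here, the classification head would be trained on the task's labeled data with a standard fine-tuning loop (e.g., the transformers Trainer), which is the usual evaluation protocol for encoder models on NLU benchmarks.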