Africa-Centric Self-Supervised Pretraining for Multilingual Speech Representation in a Sub-Saharan Context

ICLR 2024 Workshop AfricaNLP Submission 21 Authors

Published: 03 Mar 2024, Last Modified: 10 May 2024 · AfricaNLP 2024 · CC BY 4.0
Keywords: Self-supervised speech representation, African languages, Multilingual ASR, HuBERT
TL;DR: First publicly shared self-supervised multilingual speech model trained exclusively on sub-Saharan African speech data
Abstract: We present the first self-supervised multilingual speech model trained exclusively on African speech. The model was trained on nearly 60,000 hours of unlabeled speech segments in 21 languages and dialects spoken in sub-Saharan Africa. On the sub-Saharan African (SSA) subset of the FLEURS-102 dataset, our approach, based on a HuBERT$_{base}$ (0.09B) architecture, achieves ASR results competitive with the w2v-bert-51 (0.6B) pre-trained model proposed in the FLEURS benchmark, while being more efficient, using 7x less data and 6x fewer parameters. Furthermore, on the LID downstream task, our approach outperforms the FLEURS baselines in accuracy by over 22\%.
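To make the downstream setup concrete, below is a minimal sketch of how a HuBERT$_{base}$-style encoder can be used as the front end for ASR or LID fine-tuning. It assumes the Hugging Face transformers library; the checkpoint name "facebook/hubert-base-ls960" is a publicly available HuBERT$_{base}$ model used only as a stand-in for the architecture described here, not the authors' released Africa-centric model.

```python
import torch
from transformers import AutoFeatureExtractor, HubertModel

# Stand-in checkpoint: same HuBERT-base architecture (~0.09B parameters) as in the
# paper, but NOT the Africa-centric pretrained model itself.
CHECKPOINT = "facebook/hubert-base-ls960"

feature_extractor = AutoFeatureExtractor.from_pretrained(CHECKPOINT)
model = HubertModel.from_pretrained(CHECKPOINT)

# One second of 16 kHz dummy audio in place of a real sub-Saharan speech segment.
waveform = torch.zeros(16000)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # Frame-level representations: shape (batch, frames, 768) for the base model.
    hidden_states = model(**inputs).last_hidden_state

# For ASR, a CTC head is typically trained on top of the frame-level states;
# for LID, a pooled utterance embedding (e.g. mean over time) feeds a classifier.
utterance_embedding = hidden_states.mean(dim=1)
print(hidden_states.shape, utterance_embedding.shape)
```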
Submission Number: 21