Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark

20 Aug 2021 (edited 06 Jan 2022) · NeurIPS 2021 Datasets and Benchmarks Track (Round 2)
  • Keywords: French language, speech benchmark, ssl, self-supervised learning, asr, automatic speech recognition, slu, spoken language understanding, speech translation, speech-to-text, emotion recognition
  • Abstract: Self-Supervised Learning (SSL) has yielded remarkable improvements in many different domains, including computer vision, natural language processing, and speech processing, by leveraging large amounts of unlabeled data. In the specific context of speech, however, and despite promising results, there is a clear lack of standardization in the evaluation process that would allow comprehensive comparisons of these models. This issue becomes even more acute when investigating SSL approaches for languages other than English. We present LeBenchmark, an open-source and reproducible framework for assessing SSL from French speech data. It includes documented, large-scale, and heterogeneous corpora, seven pretrained SSL wav2vec 2.0 models shared with the community, and a clear evaluation protocol comprising four downstream tasks along with their scoring scripts: automatic speech recognition, spoken language understanding, automatic speech translation, and automatic emotion recognition. For the first time, SSL models are analyzed and compared on these tasks from both a task-agnostic (i.e., frozen) and a task-specific (i.e., fine-tuned w.r.t. the downstream task) perspective. We report state-of-the-art performance on most of the considered French tasks and provide a readable evaluation set-up for the development of future SSL models for speech processing.
  • Supplementary Material: pdf
  • URL: http://lebenchmark.com/
  • Contribution Process Agreement: Yes
  • Dataset Url: http://lebenchmark.com
  • License: MIT License
  • Author Statement: Yes