DIALECTBENCH: An NLP Benchmark for Dialects, Varieties, and Closely-Related Languages

Published: 01 Aug 2024, Last Modified: 22 Dec 2025
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
License: CC BY-NC-SA 4.0
Abstract: Language technologies should be judged on their usefulness in real-world use cases. An often overlooked aspect in natural language processing (NLP) research and evaluation is language variation in the form of non-standard dialects or language varieties (hereafter, varieties). Most NLP benchmarks are limited to standard language varieties. To fill this gap, we propose DIALECTBENCH, the first-ever large-scale benchmark for NLP on varieties, which aggregates an extensive set of task-varied variety datasets (10 text-level tasks covering 281 varieties). This allows for a comprehensive evaluation of NLP system performance on different varieties. We provide substantial evidence of performance disparities between standard and non-standard language varieties, and we also identify language clusters with larger performance divergence across tasks. We believe DIALECTBENCH provides a comprehensive view of the current state of NLP for varieties and one step towards advancing it further.