Keywords: multilingual, web corpora
Abstract: The need for large text corpora has increased with the advent of pretrained language models and, in particular, the discovery of scaling laws for these models. Most available corpora have sufficient data only for languages with large dominant communities. However, no corpus is available that (i) covers a wide range of minority languages, (ii) is generated by an open-source reproducible pipeline, and (iii) is rigorously cleaned of noise, making it trustworthy to use. We present GlotCC, a clean, document-level, 2TB general-domain corpus derived from CommonCrawl, covering more than 1000 languages. We make GlotCC and the system used to generate it (including the pipeline, language identification model, and filters) available to the research community.
Corpus v. 1.0 https://huggingface.co/datasets/cis-lmu/GlotCC-v1
Pipeline v. 3.0 https://github.com/cisnlp/GlotCC
Submission Number: 12