On the Impact of Cross-Domain Data on German Language Models

Published: 07 Oct 2023, Last Modified: 01 Dec 2023 · EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: Multilinguality and Linguistic Diversity
Submission Track 2: Machine Learning for NLP
Keywords: large language models, cross-domain datasets, data diversity, data quality, benchmark
TL;DR: German language models trained on a diverse cross-domain dataset outperform models trained on high-quality data alone, leading to new state-of-the-art results.
Abstract: Traditionally, large language models have been trained either on general web crawls or on domain-specific data. However, recent successes of generative large language models have shed light on the benefits of cross-domain datasets. To examine the significance of prioritizing data diversity over quality, we present a German dataset comprising texts from five domains, along with a second dataset intended to contain only high-quality data. By training a series of models ranging from 122M to 750M parameters on both datasets, we conduct a comprehensive benchmark on multiple downstream tasks. Our findings demonstrate that the models trained on the cross-domain dataset outperform those trained on quality data alone, yielding improvements of up to 4.45% over the previous state of the art.
Submission Number: 5849