AfriqueLLM: How Data Mixing and Model Architecture Impact Continued Pre-training for African Languages
Keywords: African LLMs, Continued Pre-training, Data Mixtures
Abstract: Large language models (LLMs) are increasingly multilingual, yet open models continue to underperform relative to proprietary systems, with the gap most pronounced for African languages. Continued pre-training (CPT) offers a practical route to language adaptation, but improvements on demanding capabilities such as mathematical reasoning often remain limited. This limitation is driven in part by the uneven domain coverage and missing task-relevant knowledge that characterize many low-resource language corpora. We present AfriqueLLM, a suite of open LLMs adapted to 20 African languages through CPT on 26B tokens. We perform a comprehensive empirical study across five base models spanning sizes and architectures, including Llama 3.1, Gemma 3, and Qwen 3, and systematically analyze how CPT data composition shapes downstream performance. In particular, we vary mixtures that include math, code, and synthetic translated data, and evaluate the resulting models on a range of multilingual benchmarks. Our results identify data composition as the primary driver of CPT gains. Adding math, code, and synthetic translated data yields consistent improvements, including on reasoning-oriented evaluations. Within a fixed architecture, larger models typically perform better, but architectural choices dominate scale when comparing across model families. Moreover, strong multilingual performance in the base model does not reliably predict post-CPT outcomes; robust architectures coupled with task-aligned data provide a more dependable recipe. Finally, our best models improve long-context performance, including on document-level translation.
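As a purely illustrative sketch of the kind of data mixture the abstract describes (not the paper's actual recipe; all domain names, weights, and the sampling scheme below are hypothetical), a CPT mixture combining target-language text with math, code, and synthetic translated data can be expressed as sampling weights over source domains:

```python
import random

# Hypothetical CPT data mixture: relative sampling weights per domain.
# All weights and domain names are illustrative, not taken from the paper.
MIXTURE = {
    "african_web_text": 0.55,      # monolingual corpora in the target languages
    "english_replay": 0.15,        # high-resource replay data to limit forgetting
    "math": 0.10,                  # math documents, aimed at reasoning ability
    "code": 0.10,                  # source-code corpora
    "synthetic_translated": 0.10,  # machine-translated task-relevant text
}

def sample_domain(rng: random.Random) -> str:
    """Pick a source domain with probability proportional to its weight."""
    domains, weights = zip(*MIXTURE.items())
    return rng.choices(domains, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(0)
    counts = {domain: 0 for domain in MIXTURE}
    for _ in range(10_000):
        counts[sample_domain(rng)] += 1
    print(counts)  # empirical counts roughly track the mixture weights
```

In a training pipeline, such weights would typically govern how often each domain's shards are drawn when assembling CPT batches, which is one common way to realize a fixed data-composition recipe.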
Paper Type: Long
Research Area: Multilinguality and Language Diversity
Research Area Keywords: Language Modeling, Multilingualism and Cross-Lingual NLP, Machine Translation, Question Answering, Resources and Evaluation, Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: Afrikaans, Amharic, Egyptian Arabic, English, Ewe, French, Hausa, Igbo, Kinyarwanda, Lingala, Luganda, Moroccan Arabic, Nyanja, Oromo, Plateau Malagasy, Portuguese, Shona, Somali, Sotho, Swahili, Tigrinya, Tswana, Tunisian Arabic, Twi, Vai, Wolof, Xhosa, Yoruba, Zulu
Submission Number: 1194