Abstract: Wikimedia content is used extensively by the AI community, and by the language modeling community in particular. In this paper,
we review the different ways in which Wikimedia data is curated for use in NLP tasks across pre-training, post-training, and
model evaluations. We point to opportunities for greater use of Wikimedia content but also identify ways in which the language modeling community could better center the needs of Wikimedia editors. In particular, we call for incorporating additional sources of Wikimedia data, a greater focus on benchmarks for LLMs that encode Wikimedia principles, and greater multilingualism in Wikimedia-derived datasets.