This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish

Published: 17 Sept 2022, Last Modified: 23 May 2023, NeurIPS 2022 Datasets and Benchmarks, Readers: Everyone
Keywords: benchmark, leaderboard, NLP benchmarking, Polish language
Abstract: The availability of compute and data to train ever-larger language models increases the demand for robust methods of benchmarking the true progress of LM training. Recent years have witnessed significant progress in standardized benchmarking for English: benchmarks such as GLUE, SuperGLUE, and KILT have become de facto standard tools for comparing large language models. Following the trend of replicating GLUE for other languages, the KLEJ benchmark (klej is the Polish word for glue) has been released for Polish. In this paper, we evaluate the progress in benchmarking for low-resourced languages. We note that only a handful of languages have such comprehensive benchmarks, and we point to the gap in the number of tasks covered by benchmarks for resource-rich English and Chinese versus the rest of the world. We introduce LEPISZCZE (lepiszcze is the Polish word for glew, the Middle English predecessor of glue), a new, comprehensive benchmark for Polish NLP with a large variety of tasks and a high-quality operationalization of the benchmark. We design LEPISZCZE with flexibility in mind: adding new models, datasets, and tasks is as simple as possible, while data versioning and model tracking are still provided. In the first run of the benchmark, we conduct 13 experiments (task and dataset pairs) based on the five most recent LMs for Polish, using five datasets from the existing Polish benchmark and adding eight novel datasets. As the paper's main contribution, apart from LEPISZCZE itself, we provide the insights and lessons learned while creating the benchmark for Polish as a blueprint for designing similar benchmarks for other low-resourced languages.
Author Statement: Yes
TL;DR: In this paper we introduce LEPISZCZE (lepiszcze is the Polish word for glew, the Middle English predecessor of glue), a new, comprehensive benchmark for Polish NLP with a large variety of tasks and high-quality operationalization of the benchmark.
License: Licenses are provided in the HuggingFace dataset cards, in the GitHub repositories, and on the LEPISZCZE benchmark website: https://lepiszcze.ml/datasets.
URL: https://lepiszcze.ml
Supplementary Material: pdf
Dataset Url: https://lepiszcze.ml/datasets/
Contribution Process Agreement: Yes
In Person Attendance: Yes
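
The benchmark datasets are distributed via the HuggingFace Hub (see the dataset page above). Below is a minimal sketch of how one of them could be loaded with the HuggingFace `datasets` library; the dataset identifier used here is an assumption for illustration only and should be replaced with one of the IDs listed at https://lepiszcze.ml/datasets/.

```python
# Sketch: loading a LEPISZCZE benchmark dataset from the Hugging Face Hub.
# The identifier "clarin-pl/polemo2-official" is an assumed example;
# consult https://lepiszcze.ml/datasets/ for the actual dataset IDs.
from datasets import load_dataset

dataset = load_dataset("clarin-pl/polemo2-official")

print(dataset)              # available splits and their column names
print(dataset["train"][0])  # inspect a single training example
```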