Keywords: benchmark, leaderboard, NLP benchmarking, Polish language
TL;DR: In this paper, we introduce LEPISZCZE (lepiszcze is the Polish word for glew, the Middle English predecessor of glue), a new, comprehensive benchmark for Polish NLP with a large variety of tasks and high-quality operationalization of the benchmark.
Abstract: The availability of compute and data to train larger and larger language models increases the demand for robust methods of benchmarking the true progress of LM training. Recent years witnessed significant progress in standardized benchmarking for English. Benchmarks such as GLUE, SuperGLUE, or KILT have become de facto standard tools to compare large language models. Following the trend to replicate GLUE for other languages, the KLEJ benchmark (klej is the word for glue in Polish) has been released for Polish. In this paper, we evaluate the progress in benchmarking for low-resourced languages. We note that only a handful of languages have such comprehensive benchmarks. We also note the gap in the number of tasks covered by benchmarks for resource-rich English and Chinese compared to the rest of the world. In this paper, we introduce LEPISZCZE (lepiszcze is the Polish word for glew, the Middle English predecessor of glue), a new, comprehensive benchmark for Polish NLP with a large variety of tasks and high-quality operationalization of the benchmark. We design LEPISZCZE with flexibility in mind. Including new models, datasets, and tasks is as simple as possible while still offering data versioning and model tracking. In the first run of the benchmark, we test 13 experiments (task and dataset pairs) based on the five most recent LMs for Polish. We use five datasets from the existing Polish benchmark and add eight novel datasets. As the paper's main contribution, apart from LEPISZCZE, we provide insights and experiences learned while creating the benchmark for Polish as a blueprint for designing similar benchmarks for other low-resourced languages.
Supplementary Material: pdf
Dataset Url: https://lepiszcze.ml/datasets/
License: Licenses are provided in the HuggingFace dataset cards, in the GitHub repositories, and on the LEPISZCZE benchmark website https://lepiszcze.ml/datasets.
Author Statement: Yes
Contribution Process Agreement: Yes
In Person Attendance: Yes