KLUE: Korean Language Understanding Evaluation

Published: 11 Oct 2021, Last Modified: 22 Oct 2023
NeurIPS 2021 Datasets and Benchmarks Track (Round 2)
Readers: Everyone
Keywords: Korean Natural Language Understanding Benchmarks, Pre-trained Korean Language Models
Abstract: We introduce the Korean Language Understanding Evaluation (KLUE) benchmark. KLUE is a collection of eight Korean natural language understanding (NLU) tasks, including Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking. We create all of the datasets from scratch in a principled way. We design the tasks to have diverse formats and build each task upon various source corpora that respect copyrights. We also propose suitable evaluation metrics and organize annotation protocols to ensure quality. To prevent ethical risks in KLUE, we proactively remove examples that reflect social biases or contain toxic content or personally identifiable information (PII). Along with the benchmark datasets, we release pre-trained language models (PLMs) for Korean, KLUE-BERT and KLUE-RoBERTa, and find that KLUE-RoBERTa-large outperforms other baselines, including multilingual PLMs and existing open-source Korean PLMs. The fine-tuning recipes are publicly available so anyone can reproduce our baseline results. We believe our work will facilitate future research on Korean and cross-lingual language models and the creation of similar resources for other languages. KLUE is available at https://klue-benchmark.com.
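Along with the benchmark, the abstract notes that KLUE-BERT, KLUE-RoBERTa, and the fine-tuning recipes are released. Below is a minimal sketch of how one might load a released checkpoint and one KLUE task; the Hugging Face Hub identifiers (`klue/roberta-large`, the `klue` dataset with its `ynat` config) and the field names are assumptions about how the resources are hosted, not details stated on this page.

```python
# Minimal sketch (assumed hosting details, not from the paper): load the
# released KLUE-RoBERTa checkpoint and one KLUE task from the Hugging Face Hub.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
# YNAT (Topic Classification) has 7 topic labels; the classification head
# created here is randomly initialized and still needs fine-tuning on YNAT.
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/roberta-large", num_labels=7
)

# Load the Topic Classification (YNAT) validation split of KLUE.
ynat = load_dataset("klue", "ynat", split="validation")
batch = tokenizer(ynat[0]["title"], return_tensors="pt")
logits = model(**batch).logits  # shape: (1, 7)
```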
URL: https://github.com/KLUE-benchmark/KLUE
TL;DR: Releasing a Korean natural language understanding benchmark covering 8 tasks, along with pre-trained Korean language models as baselines.
Supplementary Material: zip
Contribution Process Agreement: Yes
Dataset Url: https://klue-benchmark.com/
Dataset Embargo: N/A
License: CC BY-SA 4.0
Author Statement: Yes
Community Implementations: [13 code implementations](https://www.catalyzex.com/paper/arxiv:2105.09680/code)