From KMMLU-Redux to Pro: A Professional Korean Benchmark Suite for LLM Evaluation

ACL ARR 2025 May Submission5582 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: The development of Large Language Models (LLMs) requires robust benchmarks that span not only academic domains but also industrial fields, so that their applicability in real-world scenarios can be evaluated effectively. In this paper, we introduce two Korean expert-level benchmarks. KMMLU-Redux, reconstructed from the existing KMMLU, consists of questions from the Korean National Technical Qualification exams, with critical errors removed to improve reliability. KMMLU-Pro is based on the Korean National Professional Licensure exams and reflects professional knowledge in Korea. Our experiments demonstrate that these benchmarks comprehensively represent industrial knowledge in Korea.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: evaluation, benchmarking, evaluation methodologies
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: Korean
Submission Number: 5582