Developing a Pragmatic Benchmark for Assessing Korean Legal Language Understanding in Large Language Models

ACL ARR 2024 June Submission 2481 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large language models (LLMs) have demonstrated remarkable performance in the legal domain, with GPT-4 even passing the Uniform Bar Exam. However, their efficacy remains limited for non-standardized tasks and for tasks in languages other than English. This underscores the need to carefully evaluate LLMs within each legal system before application. Here, we introduce KBL, a benchmark for assessing the Korean legal language understanding of LLMs, consisting of (1) 7 legal knowledge tasks (503 examples), (2) 4 legal reasoning tasks (270 examples), and (3) the Korean bar exam (4 domains, 53 tasks, 2,510 examples). The first two datasets were developed in close collaboration with lawyers so that LLMs can be evaluated on practical scenarios in a certified manner. Furthermore, because legal practitioners routinely consult extensive legal documents in their work, we assess LLMs in both a closed-book setting, where they rely solely on internal knowledge, and a retrieval-augmented generation (RAG) setting, using a corpus of Korean statutes and precedents. The results indicate substantial room and opportunities for improvement.
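
To make the two evaluation settings concrete, below is a minimal sketch of a closed-book versus RAG answering loop. The `generate` and `retrieve` functions, the toy word-overlap retriever, and the example statute snippets are hypothetical placeholders for illustration only, not the paper's actual pipeline or corpus.

```python
# Minimal sketch of the two evaluation settings described in the abstract.
# `generate` stands in for any LLM call (e.g., an API client), and the
# retriever is a toy lexical scorer over a tiny corpus -- both are
# hypothetical placeholders, not the benchmark's real implementation.

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real model client."""
    return "A"  # dummy answer choice

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank passages by word overlap with the question."""
    q_tokens = set(question.split())
    scored = sorted(corpus,
                    key=lambda p: len(q_tokens & set(p.split())),
                    reverse=True)
    return scored[:k]

def answer(question: str, corpus: list[str] | None = None) -> str:
    if corpus is None:
        # Closed-book setting: the model relies solely on internal knowledge.
        prompt = f"Question: {question}\nAnswer:"
    else:
        # RAG setting: prepend retrieved statutes/precedents as context.
        context = "\n".join(retrieve(question, corpus))
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

# Hypothetical corpus entries (statute snippets) for illustration.
statutes = ["민법 제750조 불법행위로 인한 손해배상 ...",
            "형법 제250조 살인 ..."]
print(answer("손해배상 책임의 근거 조문은?"))            # closed-book
print(answer("손해배상 책임의 근거 조문은?", statutes))  # RAG
```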
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Legal NLP, benchmarking, NLP datasets
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: Korean
Submission Number: 2481