EXCGEC: A Benchmark of Edit-wise Explainable Chinese Grammatical Error Correction

ACL ARR 2024 June Submission 3524 Authors

16 Jun 2024 (modified: 08 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Existing studies explore the explainability of Grammatical Error Correction (GEC) only in limited scenarios, ignoring the interaction between corrections and explanations. To bridge this gap, this paper introduces the task of EXplainable GEC (**EXGEC**), which treats correction and explanation as integral, interacting sub-tasks. To facilitate the task, we propose **EXCGEC**, a tailored benchmark for Chinese EXGEC consisting of 8,216 explanation-augmented samples featuring hybrid edit-wise explanations. We benchmark several series of LLMs in multiple settings, covering both post-explaining and pre-explaining. To promote the development of the task, we further introduce a comprehensive suite of automatic metrics and conduct human evaluation experiments demonstrating that the automatic metrics for free-text explanations are consistent with human judgments.
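
For illustration only, below is a minimal sketch of what one explanation-augmented sample with an edit-wise explanation might look like; the field names, the example sentence, and the error-type label are assumptions made for this sketch, not the benchmark's actual schema.

```python
# Hypothetical structure of a single EXCGEC-style sample (field names are
# illustrative assumptions, not the benchmark's published schema).
sample = {
    "source": "他明天去了北京。",  # erroneous sentence: perfective 了 conflicts with 明天 (tomorrow)
    "target": "他明天去北京。",    # corrected sentence
    "edits": [
        {
            "span": "去了",                        # error span in the source
            "correction": "去",                    # edited text after correction
            "error_type": "aspect-marker misuse",  # hypothetical error-type label
            "explanation": (
                "The perfective marker 了 signals a completed action, which "
                "conflicts with the future time word 明天, so it should be removed."
            ),
        }
    ],
}

# Post-explaining would generate the correction first and then explain each edit;
# pre-explaining would generate the explanation before producing the correction.
print(sample["edits"][0]["explanation"])
```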
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: free-text/natural language explanations, GEC, educational applications, benchmarking
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: Chinese
Submission Number: 3524