Abstract: Grammatical Error Correction (GEC) faces the important yet challenging issue of explainability, especially when GEC systems are developed for language learners, who often struggle to understand correction results without reasonable explanations. Extractive evidence words and grammatical error types are two crucial components of GEC explanations. However, existing work focuses on extracting evidence words and predicting grammatical error types given a source sentence and/or a target sentence as input, ignoring the interaction between explanations and corrections. To bridge this gap, we introduce \textbf{EXGEC}, a unified explainable GEC framework that jointly performs the explanation and correction tasks in a sequence-to-sequence generation manner, based on the hypothesis that each task can benefit the other. Extensive experiments allow us to fully understand and characterize the interaction between the two tasks. In particular, when models are required to jointly predict corrections and explanations, the performance of both tasks improves over their respective single-task baselines. Additionally, we observe that EXPECT, a recent explainable GEC dataset, contains considerable noise that may confound model training and evaluation. We therefore rebuild EXPECT to eliminate this noise, yielding a more objective training and evaluation pipeline.
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.