C-LLM: Learn to Check Chinese Spelling Errors Character by Character

ACL ARR 2024 June Submission5390 Authors

16 Jun 2024 (modified: 07 Aug 2024) · ACL ARR 2024 June Submission · License: CC BY 4.0
Abstract: Chinese Spell Checking (CSC) aims to detect and correct spelling errors in sentences. Although Large Language Models (LLMs) exhibit robust capabilities and are widely applied to various tasks, their performance on CSC is often unsatisfactory. We find that LLMs fail to meet the Chinese character-level constraints of the CSC task, namely equal length and phonetic similarity, leading to a performance bottleneck. Further analysis reveals that this issue stems from the granularity of tokenization, as current mixed character-word tokenization struggles to satisfy these character-level constraints. To address this issue, we propose C-LLM, a Large Language Model-based Chinese Spell Checking method that learns to check errors Character by Character. Character-level tokenization enables the model to learn character-level alignment, effectively mitigating issues related to character-level constraints. Furthermore, it simplifies CSC into a replication-dominated, substitution-supplemented task. Experiments on two CSC benchmarks demonstrate that C-LLM achieves a 2.1\% improvement in general scenarios and a significant 12\% improvement in vertical-domain scenarios over existing methods, establishing state-of-the-art performance.
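The abstract's core argument can be illustrated with a small sketch. This is not the paper's implementation; the sentences and the mixed-tokenizer segmentations below are hypothetical examples, chosen only to show why character-level tokenization preserves the equal-length constraint while mixed character-word tokenization can break it.

```python
def char_tokenize(sentence: str) -> list:
    # Character-level tokenization: one token per Chinese character.
    return list(sentence)

src = "我喜欢吃平果"  # "平果" misspells "苹果" (apple)
tgt = "我喜欢吃苹果"

# Hypothetical mixed character-word tokenizations: the misspelled word is
# out-of-vocabulary and falls back to single characters, while the correct
# word is one token, so the token counts of source and target diverge and
# a one-to-one token alignment is impossible.
mixed_src = ["我", "喜欢", "吃", "平", "果"]  # 5 tokens
mixed_tgt = ["我", "喜欢", "吃", "苹果"]       # 4 tokens
assert len(mixed_src) != len(mixed_tgt)

# Character-level tokenization keeps source and target the same length,
# so correction reduces to copying most positions (replication) and
# substituting the few erroneous ones.
src_toks, tgt_toks = char_tokenize(src), char_tokenize(tgt)
assert len(src_toks) == len(tgt_toks)
edits = [(i, s, t) for i, (s, t) in enumerate(zip(src_toks, tgt_toks)) if s != t]
print(edits)  # only one substitution: 平 -> 苹
```

Under this framing, most output tokens are verbatim copies of the input, which is what the abstract means by a "replication-dominated and substitution-supplemented" task.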
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Chinese Spell Checking, Large Language Models
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: Chinese
Submission Number: 5390