Abstract: This paper addresses the challenge of correcting semantic, syntactic, and knowledge-related errors in Chinese government texts. Existing methods often overlook content-level inaccuracies and knowledge inconsistencies. To address this issue, we propose a knowledge-aware rewriting method with large language models (LLMs) for Chinese government text correction (KARTC for short). KARTC integrates structured knowledge bases (KBs) with LLMs through a three-stage hierarchical task-chain framework. To demonstrate its effectiveness, we conduct experiments on NLPCC 2025 Shared Task 5. The results show that KARTC achieves 79.64% accuracy and ranks 2nd in NLPCC 2025 Shared Task 5.
External IDs: doi:10.1007/978-981-95-3352-7_37