KARTC: Knowledge-Aware Rewriting with Large Language Models for Chinese Government Text Correction

Lei Hu, Wenting Zhang, Lihua Tan, Zhiwen Xie, Guangyou Zhou

Published: 01 Jan 2026, Last Modified: 15 Jan 2026, License: CC BY-SA 4.0
Abstract: This paper addresses the challenge of correcting semantic, syntactic, and knowledge-related errors in Chinese government texts. Existing methods often overlook content-level inaccuracies and knowledge inconsistencies. To address this issue, we propose a knowledge-aware rewriting method with large language models (LLMs) for Chinese government text correction (KARTC for short). KARTC integrates structured knowledge bases (KBs) with LLMs through a three-stage hierarchical task-chain framework. To demonstrate the effectiveness of the proposed method, we conduct experiments on NLPCC 2025 shared task 5. The results show that KARTC achieves 79.64% accuracy and ranks 2nd on NLPCC 2025 shared task 5.