Keywords: Chinese spelling correction, explainable deep learning, associative knowledge network, explainable statistic, attention distribution, statistical alignment
TL;DR: AxBERT is an explainable Chinese spelling correction method that achieves a predictable and regulatable correction process with strong performance.
Abstract: Deep learning has shown promising performance on various machine learning tasks. Nevertheless, the unexplainability of deep learning models severely restricts their use in domains that require feature explanations, such as text correction. We therefore propose a novel explainable deep learning model, named AxBERT, for Chinese spelling correction, built by aligning BERT with an associative knowledge network (AKN). The AKN is constructed from the co-occurrence relations among Chinese characters and represents explainable statistical logic, in contrast to the unexplainable logic inside BERT. A translator matrix between BERT and AKN is introduced to align and regulate the attention component of BERT, and a weight regulator is designed to adjust BERT's attention distributions so that sentence semantics are modeled appropriately. Experimental results on the SIGHAN datasets demonstrate that AxBERT achieves strong performance, particularly in precision, compared with baselines. Our explainability analysis, together with qualitative reasoning, effectively illustrates the explainability of AxBERT.
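The abstract states that the AKN is built from co-occurrence relations among Chinese characters. As a rough illustration only (the paper's actual AKN construction is not specified here; the function and window parameter below are assumptions, not the authors' method), counting windowed character co-occurrences over a corpus might look like:

```python
from collections import defaultdict

def build_cooccurrence(sentences, window=2):
    """Count co-occurrences of characters within a sliding window.

    Illustrative sketch only: the AKN in the paper may use a richer,
    weighted construction; this just shows the co-occurrence idea.
    """
    counts = defaultdict(int)
    for sent in sentences:
        chars = list(sent)
        for i, c in enumerate(chars):
            # Pair each character with the next `window` characters.
            for j in range(i + 1, min(i + 1 + window, len(chars))):
                counts[(c, chars[j])] += 1
    return dict(counts)

corpus = ["我喜欢学习", "我喜欢编程"]
akn = build_cooccurrence(corpus)
```

Edge weights such as `akn[("我", "喜")]` would then encode how strongly two characters associate in the corpus, which is the kind of explainable statistic the AKN is said to capture.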
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)