Chinese character recognition with radical-structured stroke trees

Published: 01 Jan 2024 · Last Modified: 08 Apr 2025 · Mach. Learn. 2024 · CC BY-SA 4.0
Abstract: Deep learning has driven rapid progress in Chinese character recognition. However, it remains a major challenge that test characters may follow a different distribution from the training dataset. Existing methods based on a single-level representation (character-level, radical-level, or stroke-level) may be either too sensitive to distribution changes (e.g., those induced by blurring, occlusion, and zero-shot problems) or too tolerant of one-to-many ambiguities. In this paper, we represent each Chinese character as a stroke tree organized according to its radical structures, exploiting the merits of both the radical and stroke levels. We propose a two-stage decomposition framework, in which a Feature-to-Radical Decoder decomposes each character into a radical sequence and a Radical-to-Stroke Decoder further decomposes each radical into its corresponding stroke sequence. The generated radical and stroke sequences are encoded as a radical-structured stroke tree (RSST), which is fed into a Tree-to-Character Translator based on the proposed Weighted Edit Distance to match the closest candidate character in the RSST lexicon. We have conducted extensive experiments on a variety of datasets, including handwritten, printed artistic, and scene character datasets. The results demonstrate that the proposed method outperforms state-of-the-art single-level methods by increasing margins as the distribution difference becomes more severe in the blurring, occlusion, and zero-shot scenarios. For example, compared with the previous SOTA method, our method improves performance by 1.74–7.58% in the handwritten character zero-shot settings.
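
To make the pipeline concrete, the sketch below shows in Python how an RSST could be represented and matched against a lexicon with a weighted edit distance. This is a minimal illustration, not the paper's method: the stroke names, the substitution-cost table, and the radical-wise flatten-and-zip alignment are all assumptions introduced here; the paper's actual Weighted Edit Distance and tree-matching procedure are not specified in the abstract.

```python
# Python 3.9+. Illustrative sketch only; cost values and alignment are assumptions.
from dataclasses import dataclass, field

# Hypothetical table of cheaper substitutions between visually similar strokes.
STROKE_SUB_COST = {
    ("heng", "ti"): 0.5,
    ("shu", "shu-gou"): 0.5,
}

def sub_cost(a: str, b: str) -> float:
    """Substitution cost between two stroke labels."""
    if a == b:
        return 0.0
    return STROKE_SUB_COST.get((a, b)) or STROKE_SUB_COST.get((b, a)) or 1.0

def weighted_edit_distance(pred: list[str], ref: list[str]) -> float:
    """Dynamic-programming edit distance with per-stroke substitution weights."""
    m, n = len(pred), len(ref)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = float(i)
    for j in range(1, n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1.0,  # delete a predicted stroke
                dp[i][j - 1] + 1.0,  # insert a reference stroke
                dp[i - 1][j - 1] + sub_cost(pred[i - 1], ref[j - 1]),
            )
    return dp[m][n]

@dataclass
class RSST:
    """Radical-structured stroke tree: leaves hold one radical's stroke sequence."""
    structure: str = ""  # e.g. "left-right", "top-bottom"; empty for leaf nodes
    children: list["RSST"] = field(default_factory=list)
    strokes: list[str] = field(default_factory=list)  # non-empty only at leaves

    def flatten(self) -> list[list[str]]:
        """Radical-wise stroke sequences in reading order."""
        if not self.children:
            return [self.strokes]
        out: list[list[str]] = []
        for child in self.children:
            out.extend(child.flatten())
        return out

def tree_distance(pred: RSST, ref: RSST) -> float:
    """Sum radical-wise distances; unmatched radicals pay their full stroke count."""
    p, r = pred.flatten(), ref.flatten()
    d = sum(weighted_edit_distance(a, b) for a, b in zip(p, r))
    for seq in p[len(r):] + r[len(p):]:
        d += len(seq)
    return d

def translate(pred: RSST, lexicon: dict[str, RSST]) -> str:
    """Tree-to-Character translation: pick the lexicon entry with minimal distance."""
    return min(lexicon, key=lambda ch: tree_distance(pred, lexicon[ch]))
```

A real system would align radicals using the tree structure itself (and the paper's learned or hand-designed costs) rather than a flat zip of leaf sequences, but the sketch shows where the Weighted Edit Distance sits in the decode-then-match pipeline.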