MaskSTR: Guide Scene Text Recognition Models with Masking

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Text recognition under information loss, such as blurriness, occlusion, and perspective distortion, is challenging in real-world applications. To enhance robustness, some studies use extra unlabeled data for encoder pretraining, while others focus on improving the decoder's context reasoning. However, pretraining methods require abundant unlabeled data and substantial computing resources, while decoder-based approaches risk over-correction. In this paper, we propose MaskSTR, a dual-branch training framework for STR models that uses patch masking to simulate information loss. MaskSTR guides visual representation learning, improving robustness under information-loss conditions without extra data or additional training stages. Furthermore, we introduce Block Masking, a novel and straightforward mask generation method, for further performance gains. Experiments demonstrate MaskSTR's effectiveness across CTC, attention, and Transformer decoding methods, achieving significant performance gains and setting new state-of-the-art results.
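The abstract describes masking contiguous patch regions to simulate information loss such as occlusion. As an illustration only, the sketch below generates a block-shaped mask over a patch grid and zeroes the covered patches; the paper's exact Block Masking procedure is not specified here, so the block-sizing heuristic and function names are assumptions.

```python
import numpy as np

def block_mask(num_h, num_w, mask_ratio=0.5, rng=None):
    """Boolean mask over a num_h x num_w patch grid; True = masked.

    A single contiguous block of patches is placed at a random position,
    sized so that roughly mask_ratio of the grid is covered.
    (Hypothetical sketch; the paper's Block Masking may differ.)
    """
    rng = np.random.default_rng(rng)
    target = int(round(mask_ratio * num_h * num_w))
    # Choose block height/width whose area approximates the target.
    bh = min(num_h, max(1, int(round(np.sqrt(target)))))
    bw = min(num_w, max(1, target // bh))
    top = rng.integers(0, num_h - bh + 1)
    left = rng.integers(0, num_w - bw + 1)
    mask = np.zeros((num_h, num_w), dtype=bool)
    mask[top:top + bh, left:left + bw] = True
    return mask

def apply_mask(patches, mask):
    """Zero out masked patches. patches: (num_h, num_w, patch_dim)."""
    out = patches.copy()
    out[mask] = 0.0
    return out
```

In a dual-branch setup of the kind the abstract sketches, the clean patches and the masked patches would be fed through the same encoder, with the masked branch forcing the visual representation to stay predictive despite the missing regions.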