Abstract: Scene text collected from unconstrained environments exhibits various types of degradation, including low resolution, cluttered backgrounds, and irregular shapes. Training a text recognition model under such degradations is notoriously hard. In this work, we analyze this problem in terms of two attributes, a semantic attribute and a geometric attribute, which are crucial cues for describing low-quality text. To handle this issue, we propose a new Self-supervised Attribute-Aware Refinement Network (SAAR-Net) that addresses both attributes simultaneously. Specifically, a novel text refining mechanism is combined with self-supervised learning across multiple auxiliary tasks. In addition, SAAR-Net extracts the semantic and geometric attributes important to text recognition by introducing a mutual information constraint that explicitly preserves invariant and discriminative information across the different tasks. The learned representation enables our method to generate clear images, leading to better recognition performance. Extensive experiments demonstrate the effectiveness of our method in both refinement and recognition.