Abstract: Previous license plate recognition (LPR) methods have achieved impressive performance on single-type license plates. However, multi-type license plate recognition remains challenging due to varied character layouts and fonts. Two main problems arise: first, diverse character layouts cause recognition models to misperceive the locations of characters; second, different fonts can give characters of different categories similar glyphs, leading to character misidentification. To address these problems, we propose two plug-and-play modules built on an attention-based framework for multi-type license plate recognition. First, we propose a global modeling module that integrates character layout information to precisely locate characters and thus generate accurate predictions. Second, a position-aware contrastive learning module is proposed to enhance the robustness and discriminability of features, alleviating the misidentification of similar glyphs. Finally, to verify their effectiveness and generality, we apply the proposed modules to six baseline models; the results demonstrate that our method achieves state-of-the-art performance on three multi-type license plate datasets. Moreover, extensive experiments show that the proposed modules improve performance by 6.8% on RODOSOL-ALPR with only a small increase in parameters.