Beyond Cropped Regions: New Benchmark and Corresponding Baseline for Chinese Scene Text Retrieval in Diverse Layouts
TL;DR: We introduce a new benchmark for Chinese scene text retrieval, highlighting the limitations of previous methods and proposing an approach that outperforms existing techniques.
Abstract: Chinese scene text retrieval is a practical task that aims to search for images containing visual instances of a Chinese query text. The task is extremely challenging because Chinese text often appears in complex and diverse layouts in real-world scenes. Current efforts tend to inherit solutions designed for English scene text retrieval and fail to achieve satisfactory performance. In this paper, we establish a Diversified Layout benchmark for Chinese Street View Text Retrieval (DL-CSVTR), which is specifically designed to evaluate retrieval performance across various text layouts, including vertical, cross-line, and partial alignments. To address the limitations of existing methods, we propose Chinese Scene Text Retrieval CLIP (CSTR-CLIP), a novel model that integrates global visual information with multi-granularity alignment training. CSTR-CLIP applies a two-stage training process to overcome previous limitations, such as the exclusion of visual features outside the text region and the reliance on single-granularity alignment, thereby enabling the model to handle diverse text layouts effectively. Experiments on the existing benchmark show that CSTR-CLIP outperforms the previous state-of-the-art model by 18.82% in accuracy while also providing faster inference. Further analysis on DL-CSVTR confirms the superior performance of CSTR-CLIP in handling various text layouts. The dataset and code will be made publicly available to facilitate research in Chinese scene text retrieval.
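For readers who want a concrete picture of the retrieval setup, the following is a minimal sketch of CLIP-style text-to-image scoring with globally pooled image features. It is built on toy encoders and random data; the names (TextEncoder, ImageEncoder, retrieve) are illustrative stand-ins, not the authors' CSTR-CLIP implementation, and it omits the two-stage multi-granularity alignment training described in the abstract.

# Hypothetical sketch of CLIP-style scene text retrieval scoring.
# TextEncoder, ImageEncoder, and retrieve are illustrative assumptions,
# not the authors' CSTR-CLIP code.
import torch
import torch.nn.functional as F

class TextEncoder(torch.nn.Module):
    """Toy stand-in for a CLIP text tower: embeds query token ids to a vector."""
    def __init__(self, vocab_size=8000, dim=256):
        super().__init__()
        self.embed = torch.nn.EmbeddingBag(vocab_size, dim)  # mean-pools tokens

    def forward(self, token_ids):
        return F.normalize(self.embed(token_ids), dim=-1)

class ImageEncoder(torch.nn.Module):
    """Toy stand-in for a CLIP image tower. Global pooling retains context
    outside the text region, unlike crop-based retrieval pipelines."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, dim, kernel_size=7, stride=4)

    def forward(self, images):
        feats = self.conv(images).mean(dim=(2, 3))  # global average pool
        return F.normalize(feats, dim=-1)

@torch.no_grad()
def retrieve(query_tokens, images, text_enc, img_enc, top_k=5):
    """Rank gallery images by cosine similarity to the query text."""
    q = text_enc(query_tokens)      # (1, dim) unit vector
    g = img_enc(images)             # (N, dim) unit vectors
    scores = (q @ g.T).squeeze(0)   # cosine similarities
    return scores.topk(min(top_k, len(images)))

# Usage: random tensors stand in for a tokenized query and a gallery.
text_enc, img_enc = TextEncoder(), ImageEncoder()
query = torch.randint(0, 8000, (1, 6))       # e.g. a 6-character Chinese query
gallery = torch.randn(32, 3, 224, 224)       # 32 candidate scene images
values, indices = retrieve(query, gallery, text_enc, img_enc)
print(indices.tolist())

The global pooling in the image tower mirrors the abstract's point that visual features outside the cropped text region should contribute to matching; how CSTR-CLIP actually fuses them, and how its multi-granularity alignment is trained, is detailed in the paper itself.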
Lay Summary: This paper focuses on helping computers find images that contain a specific piece of Chinese text provided by the user. For example, if someone searches for a certain phrase, the system will look through a large image collection to find pictures where that phrase appears. This task is challenging because Chinese text in real-world images can appear in many complex ways—such as written vertically, spread across lines, or partially cut off. To address this, the authors created a new dataset that includes many different and difficult text layouts. They also developed a new system that helps computers better understand both the image and the text it contains. Their approach performs much better than previous systems and is faster, making it useful for real-world applications like document search, visual archiving, or navigation aids. The dataset and code will be released to support future research.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Applications->Computer Vision
Keywords: Scene text retrieval, Multimodal retrieval, Text understanding, Text layout
Submission Number: 16125