Abstract: Scene text retrieval aims to find all images containing the query text from an image gallery. Current efforts tend to adopt an Optical Character Recognition (OCR) pipeline, which requires complicated text detection and/or recognition processes, resulting in inefficient and inflexible retrieval. In contrast, in this work we propose to explore the intrinsic potential of Contrastive Language-Image Pre-training (CLIP) for OCR-free scene text retrieval. Through empirical analysis, we observe that the main challenges of CLIP as a text retriever are: 1) limited text perceptual scale, and 2) entangled visual-semantic concepts. To this end, a novel model termed FDP (Focus, Distinguish, and Prompt) is developed. FDP first focuses on scene text by shifting attention to the text area and probing the hidden text knowledge, and then distinguishes the query text as a content word or a function word for processing, in which a semantic-aware prompting scheme and a distracted-queries assistance module are utilized. Extensive experiments show that FDP significantly improves inference speed while achieving better or competitive retrieval accuracy. Notably, on the IIIT-STR benchmark, FDP surpasses the state-of-the-art method by 4.37% while running 4 times faster. Furthermore, additional experiments under phrase-level and attribute-aware scene text retrieval settings validate FDP's particular advantages in handling diverse forms of query text.
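To make the OCR-free retrieval setting concrete, below is a minimal sketch of the baseline CLIP-based ranking that the abstract builds on: gallery images are scored against the query text purely via CLIP embeddings, with no detection or recognition stage. This is not the FDP model itself; the checkpoint name and ranking logic are illustrative assumptions.

```python
# Minimal sketch: OCR-free scene text retrieval with vanilla CLIP.
# Illustrates the general setting only, NOT the proposed FDP model;
# the checkpoint name and scoring scheme are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(MODEL_NAME).eval()
processor = CLIPProcessor.from_pretrained(MODEL_NAME)

def rank_gallery(query: str, image_paths: list[str]) -> list[tuple[str, float]]:
    """Return gallery images sorted by CLIP similarity to the query text."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[query], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"])
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    # Cosine similarity between the single query and every gallery image.
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ text_emb.T).squeeze(-1)
    order = scores.argsort(descending=True)
    return [(image_paths[i], scores[i].item()) for i in order]
```

Because this pipeline involves a single forward pass per image rather than detection followed by recognition, it is the source of the inference-speed advantage the abstract reports; the paper's contribution lies in overcoming vanilla CLIP's limited text perception within this setup.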
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Engagement] Multimedia Search and Recommendation
Relevance To Conference: Scene text retrieval is inherently a multimedia task, involving two modalities: image and text. This work presents the new insight that the pre-trained vision-language model CLIP can be leveraged to realize more efficient and flexible scene text retrieval, which can promote research on and applications of multimedia processing.
Supplementary Material: zip
Submission Number: 710