Benchmarking Robustness of Text-Image Composed Retrieval

TMLR Paper 1877 Authors

28 Nov 2023 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Text-image composed retrieval aims to retrieve a target image through a composed query, which is specified in the form of an image plus text describing desired modifications to that image. It has recently attracted attention due to its ability to leverage both information-rich images and concise language to precisely express the requirements for target images. However, the robustness of these approaches against real-world corruptions, as well as the depth of their text understanding, has never been studied. In this paper, we perform the first robustness study and establish three new diversified benchmarks for systematic analysis of text-image composed retrieval against natural corruptions in both vision and text, and for further probing textual understanding. For natural corruption analysis, we introduce two new large-scale benchmark datasets, CIRR-C and FashionIQ-C, for testing in the open domain and the fashion domain respectively, both of which apply 15 visual corruptions and 7 textual corruptions. For textual understanding analysis, we introduce a new diagnostic dataset, CIRR-D, by expanding the original raw data with synthetic data; it contains modified text to better probe textual understanding ability, covering numerical variation, attribute variation, object removal, background variation, and fine-grained evaluation.
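Below is a minimal, illustrative sketch of how a composed query (reference image + modification text) might be perturbed before retrieval evaluation, in the spirit of common-corruption benchmarks. The specific functions, severity values, and corruption choices here are assumptions for illustration only and do not reproduce the paper's exact corruption pipeline.

```python
# Hypothetical example: apply one visual and one textual perturbation to a composed query.
# Not the paper's implementation; severities and corruption types are assumed.
import random
import numpy as np
from PIL import Image


def gaussian_noise(img: Image.Image, severity: int = 3) -> Image.Image:
    """Add Gaussian noise to an image at one of five assumed severity levels."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    arr = np.asarray(img).astype(np.float32) / 255.0
    noisy = np.clip(arr + np.random.normal(0.0, sigma, arr.shape), 0.0, 1.0)
    return Image.fromarray((noisy * 255).astype(np.uint8))


def char_swap(text: str, p: float = 0.1) -> str:
    """Toy textual corruption: swap adjacent characters with probability p."""
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if random.random() < p:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2
        else:
            i += 1
    return "".join(chars)


# Usage sketch: corrupt the composed query, then run the unchanged retrieval model on it.
# query_img = Image.open("reference.jpg")                       # hypothetical path
# corrupted_img = gaussian_noise(query_img, severity=3)
# corrupted_text = char_swap("make the dress sleeveless and red")
```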
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have polished the paper and highlighted the changes in blue in the updated version. In particular:
+ The formal definition of the task has been added in Section 3, "Foundation of text-image composed retrieval".
+ In Table 1, the major differences among the methods have been added.
+ A detailed comparison of CIRR-D has been added in Section 4.3.
+ We have revised the confusing statements and image captions in Figure 2 for clarity.
+ We have added two additional compared models: Pic2Word and SEARLE.
+ We have added an additional benchmark: CIRCO-C.
+ We train and evaluate three CLIP4CIR models with i) ViT-L/14 LAION-2B, ii) ViT-H/14 LAION-2B, and iii) ViT-L/14 LAION-400M as an ablation study.
+ The paper has been thoroughly polished by a native speaker in the same field (changes highlighted in red).
Assigned Action Editor: ~Mathieu_Salzmann1
Submission Number: 1877