CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion

Published: 09 Apr 2024, Last Modified: 25 Apr 2024, SynData4CV, CC BY 4.0
Keywords: Composed image retrieval, diffusion models, synthetic dataset
TL;DR: A diffusion model-based zero-shot composed image retrieval method, trained on 18.8M synthetic triplets
Abstract: We propose a novel diffusion-based model, CompoDiff, for solving zero-shot Composed Image Retrieval (ZS-CIR) with latent diffusion. This paper also introduces a new synthetic dataset, named SynthTriplets18M, containing 18.8 million triplets of reference images, conditions, and corresponding target images. Together, CompoDiff and SynthTriplets18M address the shortcomings of previous CIR approaches, such as poor generalizability caused by small dataset scale and the limited types of supported conditions. CompoDiff not only achieves a new state of the art on four ZS-CIR benchmarks (FashionIQ, CIRR, CIRCO, and GeneCIS) but also enables more versatile and controllable CIR by accepting various conditions, such as negative text and image masks. Code and dataset are available at https://github.com/navervision/CompoDiff
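To make the composed-retrieval setting concrete, the sketch below shows the generic CIR pipeline the abstract describes: a query is formed from a reference-image embedding plus a text condition (optionally minus a negative-text condition), and targets are ranked by cosine similarity. This is a minimal illustrative stand-in with placeholder embeddings and a simple additive composition, not CompoDiff's method, which instead produces the target embedding by denoising with a latent diffusion model; all function names here are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale vectors to unit L2 norm along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def compose_query(ref_emb, text_emb, neg_text_emb=None, alpha=1.0, beta=1.0):
    """Illustrative composition: add the edit-text direction to the
    reference-image embedding and optionally subtract a negative-text
    direction. (CompoDiff instead denoises the target embedding with a
    latent diffusion model; this additive form is only a stand-in.)"""
    q = ref_emb + alpha * text_emb
    if neg_text_emb is not None:
        q = q - beta * neg_text_emb
    return l2_normalize(q)

def retrieve(query, gallery, top_k=3):
    """Rank gallery items by cosine similarity to the composed query."""
    sims = l2_normalize(gallery) @ query
    return np.argsort(-sims)[:top_k]
```

Swapping in real image/text encoders (e.g. CLIP embeddings) for the placeholder vectors turns this into a basic zero-shot CIR baseline against which conditional approaches such as CompoDiff are compared.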
Submission Number: 36