Track: long paper (up to 4 pages)
Keywords: Diffusion Models, Product Recontextualization, Object Personalization, Synthetic Data Augmentation, Novel View Synthesis, E-commerce
TL;DR: A novel method that significantly improves the fidelity and quality of product recontextualization (a.k.a. object personalization) on challenging real-world datasets using diffusion models.
Abstract: We present a framework for high-fidelity product image recontextualization using text-to-image diffusion models and a novel data augmentation pipeline. This pipeline leverages image-to-video diffusion, in/outpainting, and counterfactual generation to create synthetic training data, addressing the limitations of real-world data collection for this task. Our method improves the quality and diversity of generated images by disentangling product representations and enhancing the model's understanding of product characteristics. Evaluation on the ABO dataset and a private product dataset, using both automated metrics and human assessment, demonstrates the effectiveness of our framework in generating realistic and compelling product visualizations, with implications for diverse applications such as e-commerce and virtual product showcasing.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 31