Improving Physical Object State Representation in Text-to-Image Generative Systems

Published: 06 May 2025 (last modified: 06 May 2025) · SynData4CV · CC BY 4.0
Keywords: Synthetic data, Text-to-Image Generation, Object State Representation, Generative Models, Visual Alignment, Benchmark Evaluation
TL;DR: We introduce a fully automatic synthetic data generation pipeline that fine-tunes text-to-image models to accurately depict objects in empty or absent states, significantly enhancing semantic alignment without sacrificing visual quality.
Abstract: Current text-to-image generative models struggle to accurately represent object states (e.g., "a table without a bottle," "an empty tumbler"). In this work, we first design a fully automatic pipeline to generate high-quality synthetic data that accurately captures objects in varied states. Next, we fine-tune several open-source text-to-image models on this synthetic data. We evaluate the fine-tuned models by quantifying the alignment of the generated images to their prompts using GPT, and achieve an average absolute improvement of more than 8% across four models on the public GenAI-Bench dataset. We also curate a collection of 100 prompts focused on common objects in various physical states, and demonstrate a significant average improvement of more than 27% over the baseline on this dataset. We will release all the evaluation prompts and code soon.
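The abstract reports "average absolute improvement" in GPT-scored prompt-image alignment. As a minimal sketch of how such a metric can be aggregated (the paper's exact scoring rubric is not given here; per-image binary GPT judgments and the helper names below are assumptions for illustration):

```python
# Hedged sketch: assumes each generated image receives a GPT alignment
# judgment in [0, 1] (e.g., 1 = image matches the prompt, 0 = it does not).
# The absolute improvement is then the percentage-point gain in the mean
# score of the fine-tuned model over the baseline.

def average_alignment(scores):
    """Mean alignment score over a set of generated images."""
    return sum(scores) / len(scores)

def absolute_improvement(baseline_scores, finetuned_scores):
    """Absolute (percentage-point) gain of the fine-tuned model."""
    return 100 * (average_alignment(finetuned_scores)
                  - average_alignment(baseline_scores))

if __name__ == "__main__":
    # Hypothetical per-image judgments for the same set of prompts.
    baseline = [0, 1, 0, 0, 1, 0, 1, 0]
    finetuned = [1, 1, 0, 1, 1, 1, 1, 0]
    print(f"absolute improvement: {absolute_improvement(baseline, finetuned):.1f} pts")
```

Note this measures an absolute (percentage-point) difference, not a relative gain, which is how "8+%" and "27+%" improvements are naturally read in the abstract.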
Submission Number: 61