Keywords: Scene Graph Programming, Generative Frameworks, Synthetic Captions, Model Evaluation Metrics, Scene Graph Representation, Visual Element Taxonomy, Compositionality, Human Preference Alignment, Faithfulness Metrics, Scene Attributes, Compositional Data Generation, Diversity in Visual Scenes, Self-Improving Models, Proprietary Model Distillation, Content Moderation, Text-to-Video Generation, Text-to-3D Generation, Open-Source Models, Visual Comprehension, Synthetic Data Applications, Fine-Grained Model Analysis, Scene Complexity, Iterative Fine-Tuning, Compositional Scene Evaluation, AI-Generated Content Detection, Scene Graph Enumeration, VQA Scores, ImageReward Scores, TIFA Scores, Multi-Object Representation
TL;DR: Generate Any Scene is a framework that leverages scene graph programming to evaluate and improve text-to-vision models with a virtually infinite supply of compositional and diverse synthetic captions.
Abstract: Generative models like DALL-E and Sora have gained attention by producing implausible images, such as "astronauts riding a horse in space." Despite the proliferation of text-to-vision models that have inundated the internet with synthetic visuals, from images to 3D assets, current benchmarks predominantly evaluate these models on real-world scenes paired with captions. We introduce **Generate Any Scene**, a framework that systematically enumerates scene graphs representing a vast array of visual scenes, spanning realistic to imaginative compositions. **Generate Any Scene** leverages scene graph programming, a method for dynamically constructing scene graphs of varying complexity from a structured taxonomy of visual elements. This taxonomy includes numerous objects, attributes, and relations, enabling the synthesis of an almost infinite variety of scene graphs. Using these structured representations, **Generate Any Scene** translates each scene graph into a caption, enabling scalable evaluation of text-to-vision models through standard metrics. We conduct extensive evaluations across multiple text-to-image, text-to-video, and text-to-3D models, presenting key findings on model performance. We find that DiT-backbone text-to-image models align more closely with input captions than UNet-backbone models. Text-to-video models struggle to balance dynamics and consistency, while both text-to-video and text-to-3D models show notable gaps in human preference alignment. Additionally, we demonstrate the effectiveness of **Generate Any Scene** through three practical applications of its generated captions: (1) a self-improving framework in which models iteratively enhance their performance using generated data, (2) a distillation process that transfers specific strengths from proprietary models to open-source counterparts, and (3) improvements in content moderation by identifying and generating challenging synthetic data.
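To make the scene-graph-to-caption idea above concrete, the Python sketch below samples a toy scene graph from a hypothetical mini-taxonomy of objects, attributes, and relations, then linearizes it into a caption. The taxonomy entries, function names, and linearization rule are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import random
from dataclasses import dataclass, field

# Hypothetical mini-taxonomy; the paper's taxonomy is far larger.
OBJECTS = ["astronaut", "horse", "violin", "lighthouse"]
ATTRIBUTES = ["glowing", "wooden", "giant", "translucent"]
RELATIONS = ["riding", "next to", "floating above", "holding"]

@dataclass
class Node:
    obj: str
    attrs: list = field(default_factory=list)

def sample_scene_graph(n_objects=3, n_attrs=1, seed=None):
    """Sample a random scene graph: attributed object nodes plus relation edges."""
    rng = random.Random(seed)
    nodes = [Node(rng.choice(OBJECTS), rng.sample(ATTRIBUTES, n_attrs))
             for _ in range(n_objects)]
    # Chain consecutive nodes with a random relation so the graph stays connected;
    # scene complexity is controlled by n_objects and n_attrs.
    edges = [(i, rng.choice(RELATIONS), i + 1) for i in range(n_objects - 1)]
    return nodes, edges

def graph_to_caption(nodes, edges):
    """Linearize the scene graph into a natural-language-style caption."""
    def phrase(node):
        return " ".join(node.attrs + [node.obj])
    clauses = [f"a {phrase(nodes[s])} {rel} a {phrase(nodes[t])}"
               for s, rel, t in edges]
    return ", and ".join(clauses)

if __name__ == "__main__":
    nodes, edges = sample_scene_graph(seed=0)
    print(graph_to_caption(nodes, edges))
```

In a full pipeline, captions produced this way would be fed to a text-to-vision model and scored with metrics such as TIFA, VQA, or ImageReward, as described in the abstract.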
Submission Number: 1