Keywords: prompt engineering, image generation, diffusion model, text-to-image synthesis
TL;DR: A novel training-free prompt engineering framework that refines user prompts into model-preferred descriptions for higher-quality image generation.
Abstract: The notable gap between user-provided and model-preferred prompts poses a significant challenge for generating high-quality images with text-to-image models, compelling the need for prompt engineering.
Existing studies on prompt engineering can effectively enhance the style and aesthetics of generated images.
However, they often neglect the semantic alignment between the generated images and user descriptions, yielding visually appealing but semantically unsatisfying outputs.
In this work, we propose VisualPrompter, a novel training-free prompt engineering framework that refines user inputs into model-preferred prompts.
VisualPrompter employs an automatic self-reflection module that identifies concepts absent from the generated images, followed by a target-specific prompt optimization mechanism that revises the prompts in a fine-grained manner.
By deconstructing prompts, introducing new elements at the atomic semantic level, and then reassembling them, our model is able to maintain semantic consistency and integrity throughout the optimization process.
Extensive experiments demonstrate the effectiveness of VisualPrompter, which achieves new state-of-the-art performance on multiple benchmarks for text-image alignment evaluation.
Additionally, our framework features a plug-and-play design, making it highly adaptable to various generative models.
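To make the described two-stage loop (self-reflection, then target-specific optimization) concrete, the sketch below outlines one plausible reading of the abstract. It is not the authors' implementation: every function name (generate_image, check_concept_present, decompose_prompt, recompose_prompt) and the revision heuristic are illustrative assumptions.

```python
# Hypothetical sketch of the refinement loop described in the abstract.
# All helper callables are placeholders, not the authors' API.

from typing import Callable, List

def refine_prompt(
    user_prompt: str,
    generate_image: Callable[[str], object],               # text-to-image backbone (plug-and-play)
    check_concept_present: Callable[[object, str], bool],   # self-reflection check, e.g. a VQA model
    decompose_prompt: Callable[[str], List[str]],           # split prompt into atomic semantic units
    recompose_prompt: Callable[[List[str]], str],           # reassemble units into a fluent prompt
    max_rounds: int = 3,
) -> str:
    """Iteratively revise a prompt until every atomic concept appears in the image."""
    prompt = user_prompt
    for _ in range(max_rounds):
        image = generate_image(prompt)

        # Stage 1: self-reflection -- find concepts missing from the generated image.
        units = decompose_prompt(user_prompt)
        missing = [u for u in units if not check_concept_present(image, u)]
        if not missing:
            break  # the image already covers every concept in the user description

        # Stage 2: target-specific optimization -- re-emphasize only the missing
        # concepts at the atomic level, then reassemble the prompt so the
        # original semantics stay intact.
        kept = [u for u in units if u not in missing]
        prompt = recompose_prompt(kept + [f"clearly showing {u}" for u in missing])
    return prompt
```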
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 19539