Keywords: text-to-image, diffusion models, image generation
Abstract: Recent text-to-image diffusion models have enabled creative and photorealistic image synthesis. By varying the random seed, we can generate many images for a fixed text prompt. The seed determines the initial noise and, in multi-step inference, the noise injected for reparameterization at intermediate timesteps of the reverse diffusion process. However, the impact of the seed on the generated images remains relatively unexplored. We conduct a scientific study of how seeds influence interpretable visual dimensions of the generated images and, building on these findings, demonstrate improved image generation. Our analyses highlight the importance of selecting good seeds and offer practical utility for image generation.
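To make the abstract's point concrete, here is a minimal, purely illustrative NumPy sketch (not the paper's method or model) of how a single seed fixes both the initial noise and every intermediate reparameterization noise draw in a toy multi-step reverse-diffusion loop; the denoiser and update rule below are stand-in assumptions:

```python
import numpy as np

def sample(seed, num_steps=4, dim=8):
    # The seed fixes the initial noise x_T and every subsequent
    # noise draw used for reparameterization in the reverse process.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)  # initial noise x_T
    for t in range(num_steps, 0, -1):
        eps_hat = 0.1 * x  # stand-in for a learned denoiser's noise prediction
        # Fresh reparameterization noise at intermediate timesteps (none at t=1)
        z = rng.standard_normal(dim) if t > 1 else np.zeros(dim)
        x = x - eps_hat + 0.05 * z  # toy DDPM-style update
    return x
```

Running `sample` twice with the same seed yields identical outputs, while different seeds yield different outputs, which is why seed selection can matter for the final image.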
Email Of Author Nominated As Reviewer: katexu2011@gmail.com
Submission Number: 4