Secret Seeds in Text-to-Image Diffusion Models

NeurIPS 2024 Workshop ATTRIB Submission 45 Authors

Published: 30 Oct 2024, Last Modified: 14 Jan 2025. Venue: ATTRIB 2024. License: CC BY 4.0
Keywords: text-to-image, diffusion models, image generation
Abstract: Recent text-to-image diffusion models have enabled creative and photorealistic image synthesis. By varying the random seed, we can generate many images for a fixed text prompt. The seed controls the initial noise and, in multi-step diffusion inference, the noise used for reparameterization at intermediate timesteps in the reverse diffusion process. However, the impact of the seed on the generated images remains relatively unexplored. We conduct a scientific study of how seeds used during diffusion inference influence interpretable visual dimensions and, building on these findings, demonstrate improved image generation. Our analyses highlight the importance of selecting good seeds and offer practical utility for image generation.
Submission Number: 45
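
As an illustration of the seed's role described in the abstract, the following is a minimal sketch of seed-controlled generation using the Hugging Face diffusers library; the checkpoint name, prompt, and seed values are placeholders and are not taken from the submission, which does not specify its implementation here.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a text-to-image diffusion pipeline (checkpoint name is a placeholder).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at dusk"  # fixed text prompt

# Varying only the seed changes the initial noise (and, with stochastic
# samplers, the noise injected at intermediate timesteps), so each seed
# yields a different image for the same prompt.
for seed in [0, 1, 2, 3]:
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
    image.save(f"lighthouse_seed{seed}.png")
```

Fixing the generator's seed makes a run reproducible, so comparing outputs across seeds isolates the effect the paper studies.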