Color-SD: Stable Diffusion Model Already Has a Color Style Noisy Latent Space

Published: 01 Jan 2024. Last Modified: 15 Mar 2025. ICME 2024. License: CC BY-SA 4.0.
Abstract: We present Color-SD, a comprehensive color style transfer framework that accepts either image or text references. Built on the pretrained Stable Diffusion model, Color-SD exploits an existing color style space, enabling a training-free, tuning-free, zero-shot color style transfer method that introduces no new parameters. For image references, we first invert the source and reference images into the noisy latent space and then run parallel sampling. During this process, we perform a distribution transformation in the noisy latent space, which completes the color style transfer and generates the stylized result. For text references, we capitalize on the Stable Diffusion model's inherent text-to-image capability: we invert only the source image to the noisy latent, and the given text reference prompt guides the parallel sampling. This approach eliminates the need for training or tuning, yet produces impressive open-set transfer results. Comprehensive experiments validate the effectiveness of our method, demonstrating significant superiority over existing methods in both qualitative and quantitative evaluations.
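The abstract does not specify the exact form of the distribution transformation applied in the noisy latent space. A minimal sketch, assuming a channel-wise mean/variance matching (AdaIN-style) between the source and reference noisy latents, might look like the following; the function name, the latent shape `(C, H, W)`, and the choice of statistics are all illustrative assumptions, not the paper's published implementation:

```python
import numpy as np

def transfer_latent_stats(src, ref, eps=1e-6):
    """Align per-channel statistics of a source noisy latent with a reference.

    src, ref: arrays of shape (C, H, W) representing hypothetical noisy
    latents obtained by inverting the source and reference images.
    This is one plausible channel-wise distribution transformation
    (AdaIN-style), assumed here for illustration only.
    """
    # Per-channel statistics over the spatial dimensions.
    mu_s = src.mean(axis=(1, 2), keepdims=True)
    sd_s = src.std(axis=(1, 2), keepdims=True)
    mu_r = ref.mean(axis=(1, 2), keepdims=True)
    sd_r = ref.std(axis=(1, 2), keepdims=True)
    # Normalize the source latent, then re-scale and re-center it
    # with the reference latent's statistics.
    return (src - mu_s) / (sd_s + eps) * sd_r + mu_r
```

In this reading, the transformed latent would then continue through the parallel sampling steps of the diffusion process, so that the decoded image inherits the reference's color distribution while keeping the source's content.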
