FBSDiff: Plug-and-Play Frequency Band Substitution of Diffusion Features for Highly Controllable Text-Driven Image Translation
Abstract: Large-scale text-to-image diffusion models have been a revolutionary milestone in the evolution of generative AI and multimodal technology, allowing extraordinary image generation from natural-language text prompts. However, the lack of controllability of such models restricts their practical applicability to real-life content creation, and attention has therefore turned to leveraging a reference image to control text-to-image synthesis. This paper contributes a concise and efficient approach that adapts a pre-trained text-to-image (T2I) diffusion model to the image-to-image (I2I) paradigm in a plug-and-play manner, realizing high-quality and versatile text-driven I2I translation without any model training, model fine-tuning, or online optimization. To guide T2I generation with a reference image, we propose to model diverse guiding factors with different frequency bands of diffusion features in the DCT spectral space, and accordingly devise a novel frequency band substitution layer that dynamically substitutes a certain DCT frequency band of the diffusion features with the corresponding counterpart of the reference image along the reverse sampling process. We demonstrate that our method flexibly enables highly controllable text-driven I2I translation in both the guiding factor and the guiding intensity of the reference image, simply by adjusting the type and bandwidth of the substituted frequency band, respectively. Extensive experiments verify the superiority of our approach over related methods in visual quality, versatility, and efficiency of image translation.
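To make the core mechanism concrete, the following is a minimal sketch (not the paper's actual implementation) of one frequency band substitution step as described in the abstract: transform the diffusion features of the current sample and of the reference image into the DCT spectral space, swap a chosen frequency band, and transform back. The function name, the (C, H, W) feature layout, and the `cutoff` bandwidth parameter are illustrative assumptions.

    import numpy as np
    from scipy.fft import dctn, idctn

    def frequency_band_substitution(gen_feat, ref_feat, band="low", cutoff=16):
        # gen_feat, ref_feat: (C, H, W) diffusion feature maps taken at the
        # same reverse-sampling timestep (hypothetical interface for illustration).
        # 2D DCT over the spatial dimensions of each channel.
        gen_dct = dctn(gen_feat, axes=(-2, -1), norm="ortho")
        ref_dct = dctn(ref_feat, axes=(-2, -1), norm="ortho")

        # Low-frequency DCT coefficients lie near the top-left corner of the spectrum.
        mask = np.zeros(gen_feat.shape[-2:], dtype=bool)
        mask[:cutoff, :cutoff] = True
        if band == "high":
            # Substitute the complementary high-frequency band instead.
            mask = ~mask

        # Replace the selected band of the generated features with the
        # reference counterpart, then return to the spatial domain.
        out_dct = np.where(mask, ref_dct, gen_dct)
        return idctn(out_dct, axes=(-2, -1), norm="ortho")

In this reading, the choice of `band` corresponds to the guiding factor (e.g., low-frequency layout versus high-frequency detail) and `cutoff` to the guiding intensity, echoing the abstract's claim that controllability comes from the type and bandwidth of the substituted band.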
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Generation] Generative Multimedia
Relevance To Conference: Text-guided image-to-image translation is a typical application of multimodal technology that aims to guide image manipulation with open-domain natural-language text prompts. This work proposes a concise, efficient, and novel method to adapt off-the-shelf large-scale text-to-image diffusion models to the image-to-image paradigm, realizing plug-and-play text-driven image translation without any model training, model fine-tuning, or online optimization. We provide new insights into the controllable diffusion process from a frequency-domain perspective and contribute a novel frequency band substitution technique, realizing efficient text-driven image translation that is free from source text prompts and cumbersome attention modulations, highly controllable in both the guiding factor and the guiding intensity of the reference image, and agnostic to the diffusion model backbone used, while achieving superior image-to-image translation performance compared with existing methods.
Supplementary Material: zip
Submission Number: 3657