Training-Free Style and Content Transfer by Leveraging U-Net Skip Connections in Stable Diffusion

Published: 20 Dec 2025, Last Modified: 20 Dec 2025 · CVPR 2025 · CC BY 4.0
Keywords: image editing, diffusion models, skip connections
Abstract: Recent advances in diffusion models for image generation have led to detailed examinations of several components of the U-Net architecture for image editing. While previous studies have focused on the bottleneck layer (h-space), cross-attention, self-attention, and decoding layers, the overall role of the U-Net's skip connections has not been specifically addressed. We conduct a thorough analysis of the role of the skip connections and find that the skip connections from the third encoder block carry most of the spatial (content) information of the reconstructed image, separating content from style, which is carried by the main stream entering the corresponding decoder layer. We show that injecting the representations from this block enables text-based editing, precise modifications, and style transfer. We compare our method, SkipInject, to state-of-the-art style transfer and image editing methods and demonstrate that it achieves the best trade-off between content alignment and structural preservation.
Camera Ready Version: zip
Submission Number: 30
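For readers who want to experiment with the general idea described in the abstract, below is a minimal, hypothetical sketch of skip-connection feature injection in a Stable Diffusion U-Net, implemented with diffusers and PyTorch forward hooks. The block index, prompts, model checkpoint, and the hook-based mechanism are illustrative assumptions and do not reproduce the authors' SkipInject implementation.

```python
# Hypothetical sketch: capture the skip-connection features of the third encoder
# block during a "content" pass, then inject them during an editing/style pass.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
block = pipe.unet.down_blocks[2]  # third encoder block (assumed index)

stored = []  # per-step skip-connection features from the content pass

def capture(module, inputs, output):
    # A down block returns (hidden_states, res_samples); res_samples are the
    # tensors routed over the skip connections to the mirrored decoder block.
    stored.append(tuple(t.detach().clone() for t in output[1]))

def inject(module, inputs, output):
    # Swap in the stored skip features; the main (style) stream stays untouched.
    return (output[0], stored.pop(0))

# Pass 1: generate/reconstruct the content image while recording skip features.
generator = torch.Generator(device).manual_seed(0)
handle = block.register_forward_hook(capture)
pipe("a photo of a cat", num_inference_steps=30, generator=generator)
handle.remove()

# Pass 2: generate with the editing prompt while injecting the stored features.
generator = torch.Generator(device).manual_seed(0)
handle = block.register_forward_hook(inject)
edited = pipe("a watercolor painting of a cat",
              num_inference_steps=30, generator=generator).images[0]
handle.remove()
edited.save("edited.png")
```

Both passes must use the same number of inference steps so that one stored feature tuple is consumed per denoising step; in practice the content pass would typically come from an inversion of a real image rather than a fresh generation.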