Abstract: Style transfer, the blending of one image's content with another image's style, has advanced significantly through two primary approaches: neural network-based methods such as Neural Style Transfer, and recent text-to-image diffusion models such as Stable Diffusion. In particular, inversion-based methods such as Textual Inversion, DreamBooth, and Custom Diffusion further enhance the process by embedding new styles learned from reference images. This paper evaluates the performance of these methods on style transfer using two datasets, paintings by Edward Hopper from the WikiArt dataset and the Peanuts Comic Strip dataset, and explores the impact of the number of reference style images used during training. Our study highlights the current capabilities and future potential of diffusion-based style transfer.
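To make the inversion-based workflow the abstract describes concrete, the sketch below shows how a style embedding learned via Textual Inversion from a few reference images might be applied at inference time with the Hugging Face diffusers library. This is a minimal illustration, not the paper's exact setup: the base model id, the embedding file path, and the `<hopper-style>` placeholder token are all assumptions for demonstration.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion pipeline (model id is an assumed example,
# not necessarily the checkpoint used in the paper).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Bind a style embedding, previously learned via Textual Inversion from
# a handful of reference images, to a placeholder token. Both the file
# path and the token name below are hypothetical.
pipe.load_textual_inversion("learned_embeds.bin", token="<hopper-style>")

# Generate new content rendered in the learned style by referencing
# the placeholder token in the prompt.
image = pipe(
    "a quiet city street at night in <hopper-style> style",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("stylized.png")
```

In this setup, varying how many reference images are used when training the embedding (the factor the paper studies) changes the learned token; the inference code stays the same.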