Abstract: Image composition has advanced significantly with large-scale pre-trained text-to-image (T2I) diffusion models; however, while same-domain composition has seen steady progress, cross-domain composition remains under-explored. The main challenges are the stochastic nature of diffusion models and the style gap between input images, which often lead to failures and artifacts, as well as the heavy reliance on text prompts, which limits practical applications. This paper presents the first cross-domain image composition method that does not require text prompts, enabling natural stylization and seamless composition. Our method is efficient and robust: it preserves the diffusion prior by performing only a small number of backward-inversion and forward-denoising steps without training the diffuser, and it employs a simple multilayer perceptron (MLP) to integrate CLIP features from the foreground and background, manipulating the diffusion process through a local cross-attention strategy. This effectively preserves foreground content while enabling stable stylization without a pre-stylization network. Furthermore, we construct a benchmark dataset with diverse contents and styles for fair evaluation, addressing the lack of testing datasets for cross-domain image composition. Experimental results show that our method outperforms state-of-the-art techniques in both qualitative and quantitative evaluations, improving the LPIPS score by 30.5% and the CSD metric by 18.1%. We believe our method will advance future research and applications.
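To make the feature-fusion idea mentioned above more concrete, the following is a minimal PyTorch sketch of an MLP that merges foreground and background CLIP embeddings into a single conditioning vector; the layer sizes, class name, and fusion scheme are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: layer sizes, names, and the exact fusion scheme are
# assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn

class CLIPFeatureFuser(nn.Module):
    """Fuse CLIP embeddings of the foreground and background into one
    conditioning vector that can steer the denoising process."""
    def __init__(self, clip_dim: int = 768, hidden_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * clip_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, clip_dim),
        )

    def forward(self, fg_feat: torch.Tensor, bg_feat: torch.Tensor) -> torch.Tensor:
        # Concatenate foreground and background CLIP features and project
        # back to the CLIP embedding dimension used for conditioning.
        return self.mlp(torch.cat([fg_feat, bg_feat], dim=-1))

# Usage: CLIP image features (e.g., from a ViT-L/14 encoder) for both inputs.
fuser = CLIPFeatureFuser()
fg = torch.randn(1, 768)   # foreground CLIP embedding
bg = torch.randn(1, 768)   # background CLIP embedding
cond = fuser(fg, bg)       # conditioning vector fed to cross-attention layers
print(cond.shape)          # torch.Size([1, 768])
```

In the paper's pipeline, such a fused embedding would replace the text prompt as the conditioning signal injected through (local) cross-attention; how the attention is localized to the composited region is beyond what this sketch shows.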