PrimeComposer: Faster Progressively Combined Diffusion for Image Composition with Attention Steering

Published: 20 Jul 2024 · Last Modified: 03 Aug 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Image composition involves seamlessly integrating given objects into a specific visual context. Current training-free methods rely on composing attention weights from several samplers to guide the generator. However, since these weights are derived from disparate contexts, combining them leads to coherence confusion and loss of appearance information. These issues are aggravated by the methods' excessive focus on background generation, which is unnecessary in this task: it slows inference and compromises foreground quality. Moreover, these methods introduce unwanted artifacts in the transition area. In this paper, we formulate image composition as a subject-based local editing task that focuses solely on foreground generation. At each step, the edited foreground is combined with the noisy background to maintain scene consistency. To address the remaining issues, we propose PrimeComposer, a faster training-free diffuser that composites images through well-designed attention steering across different noise levels. This steering is primarily achieved by our Correlation Diffuser through its self-attention layers at each step. Within these layers, the synthesized subject interacts with both the referenced object and the background, capturing intricate appearance details and coherent relationships. This prior information is encoded into attention weights, which are then integrated into the self-attention layers of the generator to guide synthesis. In addition, we introduce Region-constrained Cross-Attention to confine the influence of subject-related words to the desired regions, removing the unwanted artifacts produced by prior methods and further improving coherence in the transition area. Our method achieves the fastest inference efficiency, and extensive experiments demonstrate its superiority both qualitatively and quantitatively. The code is available at https://github.com/CodeGoat24/PrimeComposer.
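To make the two core ideas in the abstract concrete, here is a minimal PyTorch-style sketch, not the authors' released code: (1) at each denoising step the edited foreground latent is combined with the background re-noised to the current timestep, and (2) a generator self-attention layer can have its attention map overridden by prior weights captured from the corresponding layer of a parallel Correlation Diffuser pass. All names are illustrative, and a diffusers-style scheduler with an `add_noise` method is assumed.

```python
import torch

def blend_foreground_with_noisy_background(x_fg, x0_bg, mask, scheduler, t):
    """Combine the foreground latent being edited with the clean background latent
    re-noised to timestep t, keeping the surrounding scene consistent."""
    noise = torch.randn_like(x0_bg)
    x_bg_t = scheduler.add_noise(x0_bg, noise, t)   # diffusers-style scheduler API
    return mask * x_fg + (1.0 - mask) * x_bg_t


class SteeredSelfAttention(torch.nn.Module):
    """Self-attention layer whose attention map can be replaced by prior weights
    captured from the matching layer of a parallel Correlation Diffuser pass."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_qkv = torch.nn.Linear(dim, dim * 3, bias=False)
        self.to_out = torch.nn.Linear(dim, dim)
        self.prior_attn = None  # set externally after the Correlation Diffuser pass

    def forward(self, x):
        b, n, d = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, -1).transpose(1, 2) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        if self.prior_attn is not None:
            # Steer generation: reuse the prior weights, which already encode the
            # reference object's appearance and its relation to the background.
            attn = self.prior_attn
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)
```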
Primary Subject Area: [Generation] Generative Multimedia
Secondary Subject Area: [Content] Vision and Language, [Content] Multimodal Fusion, [Experience] Multimedia Applications
Relevance To Conference: In image composition, current training-free methods avoid costly optimization and retraining, yet they still fail to capture the nuanced appearance of objects and to establish reliable coherent relations. More effective steering mechanisms for training-free composition that do not compromise efficiency are therefore needed. In this work, we formulate image composition as a local, object-guided editing task, using an LDM to depict the object and synthesize natural coherence, with the goal of seamlessly placing the user-provided object in a specified foreground area without compromising efficiency. To achieve this, we propose PrimeComposer, a faster training-free, progressively combined composer that composites images through well-designed attention steering across different noise levels. Specifically, we leverage the caption prompt to guide the synthesis of transition areas, and the prior attention weights from the Correlation Diffuser (CD) to steer the preservation of object appearance and the establishment of natural coherent relations. The effectiveness of CD is further enhanced through our extension of classifier-free guidance. In addition, we introduce Region-constrained Cross-Attention (RCA) to replace the cross-attention layers in the LDM. RCA restricts the influence of object-specific words to the desired spatial regions of the image, mitigating unexpected artifacts around synthesized objects. Our method achieves the fastest inference efficiency, and extensive experiments demonstrate its superiority both qualitatively and quantitatively.
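The Region-constrained Cross-Attention described above is only characterized at a high level; the following is a minimal PyTorch sketch of one plausible reading, in which subject-related text tokens are prevented from attending to pixels outside the foreground region. Class and argument names (`RegionConstrainedCrossAttention`, `region_mask`, `subject_token_ids`) are hypothetical and not taken from the released code.

```python
import torch

class RegionConstrainedCrossAttention(torch.nn.Module):
    """Cross-attention in which subject-related text tokens may only influence
    pixels inside the user-specified foreground region."""

    def __init__(self, dim, text_dim, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(text_dim, dim, bias=False)
        self.to_v = torch.nn.Linear(text_dim, dim, bias=False)
        self.to_out = torch.nn.Linear(dim, dim)

    def forward(self, x, text, region_mask, subject_token_ids):
        # x: (b, n_pixels, dim) image features; text: (b, n_tokens, text_dim)
        # region_mask: (b, n_pixels), 1 inside the foreground region, 0 outside
        # subject_token_ids: indices of the subject-related words in the prompt
        b, n, _ = x.shape
        n_tok = text.shape[1]
        q = self.to_q(x).view(b, n, self.heads, -1).transpose(1, 2)
        k = self.to_k(text).view(b, n_tok, self.heads, -1).transpose(1, 2)
        v = self.to_v(text).view(b, n_tok, self.heads, -1).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) * self.scale   # (b, heads, n_pixels, n_tok)

        # Block attention from pixels outside the region to subject-related tokens,
        # so those words cannot leak appearance into the background/transition area.
        is_subject = torch.zeros(n_tok, dtype=torch.bool, device=x.device)
        is_subject[subject_token_ids] = True
        outside = (region_mask == 0).view(b, 1, n, 1)
        scores = scores.masked_fill(outside & is_subject.view(1, 1, 1, -1),
                                    float("-inf"))

        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out)
```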
Supplementary Material: zip
Submission Number: 611