Dual-Domain Diffusion Based Progressive Style Rendering towards Semantic Structure Preservation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: In this paper, we propose a Dual-Domain Diffusion based Progressive Style Rendering (D3PSR) method to achieve style rendering from the semantic Domain A to the style Domain B. Unlike classic diffusion models, our model takes two unpaired images from the two domains as inputs, and the output is obtained at the middle layer. Benefiting from the diffusion framework, a dynamic rendering process progressively incorporates texture strokes from the style domain while preserving the semantic structure during the noise-adding steps. Our experiments show that a range of artistic styles can be successfully transferred onto the target images without breaking their semantic structures, demonstrating the merits of our diffusion-based approach and performance beyond the state of the art in style transfer. A further study uses similarity scores to measure this diffusion-based process, quantitatively showing how semantic structures are rendered throughout the progressive process.
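The abstract does not give the exact formulation, so the following is only a minimal, hypothetical sketch of the dual-domain idea it describes: two unpaired images are noised up to a middle step of a standard forward diffusion process and mixed there, so coarse semantic structure from Domain A can survive while Domain B texture is injected. All names, the schedule, and the mixing weight below are assumptions, not the authors' D3PSR method.

```python
import torch

def forward_noising(x0, t, T, noise=None):
    """DDPM-style forward step with a toy linear alpha-bar schedule.
    (A simplification; the paper's actual schedule is not stated in the abstract.)"""
    if noise is None:
        noise = torch.randn_like(x0)
    alpha_bar = 1.0 - t / T  # toy cumulative schedule in [0, 1]
    return alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * noise

def dual_domain_render(content, style, T=1000, t_mid=500):
    """Hypothetical analogue of taking the output 'at the middle layer':
    noise both unpaired inputs up to step t_mid and mix the noised latents.
    The 0.5 mixing weight is arbitrary and purely illustrative."""
    t = torch.tensor(float(t_mid))
    noised_content = forward_noising(content, t, T)
    noised_style = forward_noising(style, t, T)
    return 0.5 * (noised_content + noised_style)

# Usage with random tensors standing in for two unpaired 256x256 RGB images.
content = torch.rand(1, 3, 256, 256)   # semantic Domain A image
style = torch.rand(1, 3, 256, 256)     # style Domain B image
mixed = dual_domain_render(content, style)
print(mixed.shape)  # torch.Size([1, 3, 256, 256])
```

In the actual method a denoising stage would presumably follow this midpoint to produce the rendered image; the sketch only illustrates the noise-adding side described in the abstract.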
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Generative models