DSI2I: Dense Style for Unpaired Exemplar-based Image-to-Image Translation

Published: 29 Apr 2024, Last Modified: 17 Sept 2024. Accepted by TMLR. License: CC BY 4.0
Abstract: Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar, without ground-truth input-translation pairs. Existing UEI2I methods represent style using one vector per image or rely on semantic supervision to define one style vector per object. Here, in contrast, we propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information. We then rely on perceptual and adversarial losses to disentangle our dense style and content representations. To stylize the source content with the exemplar style, we extract unsupervised cross-domain semantic correspondences and warp the exemplar style to the source content. We demonstrate the effectiveness of our method on four datasets using standard metrics together with a localized style metric we propose, which measures style similarity in a class-wise manner. Our results show that the translations produced by our approach are more diverse, preserve the source content better, and are closer to the exemplars when compared to the state-of-the-art methods.
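The style-warping step described in the abstract can be sketched as follows: compute cross-domain correspondences between source and exemplar features, then use them to softly warp the exemplar's dense style map onto the source layout. This is a minimal sketch under assumed design choices (cosine-similarity correspondences, softmax weighting with temperature `tau`, flattened feature maps); the function name and shapes are illustrative, not the paper's exact formulation:

```python
import numpy as np

def warp_dense_style(content_feats, exemplar_feats, exemplar_style, tau=0.07):
    """Warp a dense style map from an exemplar onto source content.

    content_feats:  (Hs*Ws, C) flattened source content features
    exemplar_feats: (He*We, C) flattened exemplar features
    exemplar_style: (He*We, D) dense style map of the exemplar
    Returns a (Hs*Ws, D) style map aligned with the source layout.
    """
    # L2-normalize so the dot product is cosine similarity.
    cn = content_feats / np.linalg.norm(content_feats, axis=1, keepdims=True)
    en = exemplar_feats / np.linalg.norm(exemplar_feats, axis=1, keepdims=True)

    sim = (cn @ en.T) / tau                  # (Hs*Ws, He*We) correspondence scores
    sim -= sim.max(axis=1, keepdims=True)    # numerically stable softmax
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)        # rows are convex weights

    # Each source location receives a weighted mixture of exemplar styles.
    return w @ exemplar_style
```

Because each row of the weight matrix is a convex combination, every warped style vector stays within the range of the exemplar's style values; lowering `tau` makes the warp approach a hard nearest-neighbor assignment.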
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=fBrOJF7Pt7
Changes Since Last Submission: Revision based on the decision:
- Changes in Section 3.3 to clarify our intuition for the style components and our losses.
- Added Figure 3 to visualize the simulated correspondence matrices.
- Reference to Table 3 in Section 4.4.
Figure numbers are shifted up by one, starting from Figure 3, due to the addition of the new Figure 3.
Video: https://github.com/IVRL/dsi2i
Code: https://github.com/IVRL/dsi2i
Assigned Action Editor: ~Sungwoong_Kim2
Submission Number: 1729