Unpaired Multi-scenario Sketch Synthesis via Texture Enhancement

Published: 01 Jan 2024, Last Modified: 17 Apr 2025, PRCV (4) 2024, CC BY-SA 4.0
Abstract: Sketch is a vibrant and expressive art form. Traditional methods for synthesizing facial sketches often rely on paired data, yet collecting sketches across multiple scenarios is quite expensive. In contrast, style transfer methods can utilize unpaired data but frequently struggle to balance stylization against structural integrity. To alleviate these problems, we introduce a novel module, the Structure and Enhanced Texture Fusion (SETF) module, for unpaired multi-scenario sketch generation. The module operates in three distinct phases. First, it computes the cross-attention map \(M_{cs}\) between the content and sketch features, together with the self-attention map \(M_{s}\) over the sketch features alone, so that the network transfers style selectively in a contextually aware and coherent manner. Second, it reorganizes these attention maps and sums them into an attention-weighted feature \(A\). Third, it combines the content feature with the attention-weighted feature and normalizes the result, balancing content fidelity against sketch-style transformation. The feature map produced by the SETF module thus incorporates both the content and the sketch features according to the attention mechanism, yielding a synthesis that is both content-aware and style-specific. Comprehensive qualitative and quantitative evaluations demonstrate that our method surpasses contemporary state-of-the-art methods in both sketch stylization and structural preservation.
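The abstract only outlines the three SETF phases; the exact formulation is not given here. The following is a minimal NumPy sketch under stated assumptions: dot-product attention with softmax normalization for \(M_{cs}\) and \(M_{s}\), a simple sum for the attention-weighted feature \(A\), and per-channel standardization for the final normalization. The function name `setf_fuse` and all shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def setf_fuse(content, sketch):
    """Illustrative sketch of the three SETF phases on (N, C) features.

    Phase 1: cross-attention map M_cs (content vs. sketch) and
             self-attention map M_s (sketch vs. sketch).
    Phase 2: apply both maps to the sketch features and sum them
             into the attention-weighted feature A.
    Phase 3: combine the content feature with A, then normalize.
    """
    # Phase 1: softmax-normalized dot-product similarity maps (N, N).
    m_cs = softmax(content @ sketch.T)
    m_s = softmax(sketch @ sketch.T)

    # Phase 2: attention-weighted feature A (assumed simple sum).
    a = m_cs @ sketch + m_s @ sketch

    # Phase 3: fuse with content and standardize per channel
    # (one plausible choice for the unspecified normalization).
    fused = content + a
    return (fused - fused.mean(axis=0)) / (fused.std(axis=0) + 1e-5)

# Usage: fuse random stand-in content and sketch features.
rng = np.random.default_rng(0)
out = setf_fuse(rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
```

The output has the same shape as the content features and zero mean per channel, reflecting the abstract's claim that the fused map balances content fidelity with sketch-style statistics.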