SemiSketch: An ancient mural sketch extraction network based on reference prior and gradient frequency compensation
Abstract: Sketches hold considerable research value for archaeologists, as they convey the ancient culture, artistic techniques, and social contexts of murals. However, the widespread presence of deterioration artifacts and the scarcity of artifact databases make it difficult to train a sketch extraction model. This study develops SemiSketch, a novel semi-supervised framework that extracts clean, coherent sketches from ancient murals. By leveraging a dual-branch learning paradigm, it effectively mitigates challenges such as deterioration artifacts, noise, and data scarcity. SemiSketch innovatively uses a pixel-level reference mechanism as an intermediary “pivot” between deteriorated murals and clean sketches. It decomposes the training process into two branches: an unsupervised branch that transforms murals into sketches, and a supervised branch that refines noiseless line styles through pixel-level correspondences. We introduce a shared CNN-hybrid Vision Transformer generator to integrate the two branches, combining CNN-based transposed self-attention and axial attention to capture local and global information, thereby enhancing the extraction of key lines in murals. Additionally, a gradient frequency compensation module is employed to effectively mitigate noise caused by deterioration artifacts, resulting in more complete and cleaner sketches. Empirical evaluations are conducted on datasets of various styles, including the Fengguo Temple Buddhist frescoes, Dunhuang murals, and Indian murals. Extensive experiments show that SemiSketch substantially outperforms a wide range of baselines and effectively extracts clear and coherent sketches. We release the source code at https://github.com/Alice77bai/SemiSketch.
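The abstract does not specify how the gradient frequency compensation module works. Purely as an illustrative sketch of the general idea it names — suppressing deterioration noise by combining spatial gradient cues with frequency-domain filtering — the following NumPy snippet boosts high-frequency content only where image gradients are strong (candidate line regions). All function and parameter names (`gradient_frequency_compensate`, `hf_gain`, `cutoff`) are hypothetical and not taken from the paper.

```python
import numpy as np

def gradient_frequency_compensate(img, hf_gain=1.5, cutoff=0.25):
    """Hypothetical sketch (not the paper's actual module): amplify
    high-frequency detail in the Fourier domain, then blend it back
    only where spatial gradients indicate likely sketch lines."""
    H, W = img.shape

    # Spatial gradient magnitude via finite differences.
    gy, gx = np.gradient(img)
    grad_mag = np.hypot(gx, gy)

    # Frequency-domain high-pass mask (normalized radial cutoff).
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[0:H, 0:W]
    radius = np.hypot((yy - H / 2) / H, (xx - W / 2) / W)
    mask = (radius > cutoff).astype(float)

    # Amplify high frequencies, return to the spatial domain.
    F_comp = F * (1.0 + (hf_gain - 1.0) * mask)
    compensated = np.real(np.fft.ifft2(np.fft.ifftshift(F_comp)))

    # Gradient-weighted blend: edges get the compensated signal,
    # flat (possibly deteriorated) regions keep the original.
    w = grad_mag / (grad_mag.max() + 1e-8)
    return (1 - w) * img + w * compensated

# Toy grayscale ramp as a stand-in for a mural patch.
img = np.outer(np.linspace(0.0, 1.0, 32), np.ones(32))
out = gradient_frequency_compensate(img)
print(out.shape)
```

A real module in a network like SemiSketch would presumably learn such filtering end-to-end rather than use fixed masks; this sketch only conveys the gradient-guided frequency-compensation intuition.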
External IDs: dblp:journals/pr/YuWQZMPP26