Abstract: Computer vision techniques have revolutionized digital mural inpainting. However, single-stage networks often yield suboptimal results with blurred textures and structural distortion, while existing progressive strategies struggle to balance local and global information effectively. To address these limitations, we propose a novel generative adversarial model that progressively reconstructs mural details by adaptively integrating multi-scale local features and global context based on damage severity. We first obtain an initial coarse result using an encoder-decoder network. Then, a mask-guided network adaptively extracts and fuses local features according to damage severity. Next, multi-level residual learning further refines details at different scales. Finally, a global network captures overall artistic characteristics using an optimized Transformer-UNet architecture. In this way, our method harmonizes detailed local restoration with the preservation of overall artistic integrity throughout the progressive inpainting process. Extensive experiments on multiple mural datasets demonstrate that our method achieves state-of-the-art performance in texture clarity and structural coherence. We release the source code at https://github.com/Kk01Qq/Mural-Inpainting.
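To make the progressive design concrete, the following is a minimal PyTorch sketch of the three-stage pipeline the abstract describes: a coarse encoder-decoder, a mask-guided local refiner that weights multi-scale features by damage severity, and a Transformer-based global stage with a UNet-style skip. All class names, channel sizes, and layer choices here are illustrative assumptions, not the authors' released implementation (see the linked GitHub repository for that); the multi-level residual learning step is only approximated by the residual connections in stages two and three.

```python
import torch
import torch.nn as nn

class CoarseEncoderDecoder(nn.Module):
    """Stage 1 (sketch): coarse completion of the masked mural."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class MaskGuidedLocalRefiner(nn.Module):
    """Stage 2 (sketch): fuses multi-scale local features, weighted per pixel
    by the damage mask, and adds them back to the coarse result."""
    def __init__(self, ch=32):
        super().__init__()
        # Parallel branches with growing receptive fields (multi-scale local features).
        self.branches = nn.ModuleList(
            nn.Conv2d(3, ch, k, padding=k // 2) for k in (3, 5, 7)
        )
        # Per-pixel branch weights predicted from the damage mask.
        self.gate = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1), nn.Softmax(dim=1))
        self.fuse = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, coarse, mask):
        w = self.gate(mask)  # (B, 3, H, W): one weight map per branch
        feats = sum(w[:, i:i + 1] * b(coarse) for i, b in enumerate(self.branches))
        return coarse + self.fuse(feats)  # residual refinement

class GlobalTransformerUNet(nn.Module):
    """Stage 3 (sketch): a small Transformer bottleneck over downsampled tokens,
    with a UNet-style skip so local detail is preserved."""
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.down = nn.Conv2d(3, ch, 4, stride=4)  # tokens at 1/4 resolution
        self.attn = nn.TransformerEncoderLayer(d_model=ch, nhead=heads, batch_first=True)
        self.up = nn.ConvTranspose2d(ch, 3, 4, stride=4)

    def forward(self, x):
        f = self.down(x)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W, C) for self-attention
        f = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
        return x + self.up(f)  # global context added onto the refined image

def inpaint(image, mask):
    """image: (B, 3, H, W) in [-1, 1]; mask: (B, 1, H, W), 1 = damaged region."""
    coarse = CoarseEncoderDecoder()(torch.cat([image * (1 - mask), mask], dim=1))
    refined = MaskGuidedLocalRefiner()(coarse, mask)
    return GlobalTransformerUNet()(refined)

out = inpaint(torch.randn(1, 3, 64, 64).clamp(-1, 1), torch.rand(1, 1, 64, 64).round())
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

In a full model, each stage would be trained adversarially with its own reconstruction and GAN losses; the sketch above only traces the forward data flow from coarse completion through mask-guided local fusion to global context modeling.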