Multi-Scale Semantic-Guidance Networks: Robust Blind Face Restoration Against Adversarial Attacks

Published: 15 May 2025, Last Modified: 03 Aug 2025, IEEE Transactions on Information Forensics and Security, CC BY 4.0
Abstract: Image processing networks are known to be vulnerable to adversarial examples: adding carefully crafted perturbations to the inputs can mislead the model. This paper addresses the problem of robust blind face restoration (BFR) against adversarial attacks. BFR refers to recovering high-quality (HQ) images from low-quality (LQ) images that suffer from diverse unknown degradations, such as noise, blur, compression artifacts, and low resolution. Although existing BFR methods perform well, their outputs degrade significantly when subtle distortions and perturbations are introduced into the input images. This paper is the first to comprehensively investigate, evaluate, and improve BFR methods under adversarial attacks. Projected Gradient Descent (PGD) is employed to generate adversarial examples, and multiple types of attacks, spanning different objectives, regions, and perturbation levels, are used to thoroughly assess the robustness of various BFR methods. We evaluate the robustness of multiple BFR methods and analyze which of their structures and modules confer resilience to adversarial attacks. Experimental results demonstrate that methods using latent feature encoding or a pre-trained discrete HQ codebook are more robust than other methods, with the codebook-based design outperforming latent feature encoding. Multi-scale semantic guidance likewise proves effective at enhancing robustness. Building on these findings, we propose a robust BFR method that mitigates adversarial vulnerability while maintaining strong restoration quality. Extensive experiments on three real-world datasets demonstrate our method's state-of-the-art robustness across different scenarios.
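The abstract does not spell out the attack configuration, but the general recipe for PGD against a restoration model is standard. Below is a minimal PyTorch sketch of an untargeted L-infinity PGD attack, assuming a hypothetical `restorer` module that maps LQ images to restored images in [0, 1]; the objective (MSE between the restoration of the perturbed input and the clean restoration), the budget `epsilon`, step size `alpha`, and step count are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(restorer, lq_img, epsilon=8/255, alpha=2/255, num_steps=10):
    """Untargeted L-infinity PGD against a blind face restoration model.

    Maximizes the MSE between the restoration of the perturbed input and
    the clean restoration, keeping the perturbation within an epsilon ball.
    (Sketch: `restorer` is an assumed nn.Module, not the paper's model.)
    """
    restorer.eval()
    for p in restorer.parameters():          # freeze weights; only the input is attacked
        p.requires_grad_(False)

    with torch.no_grad():
        clean_out = restorer(lq_img)         # reference restoration of the clean input

    # Random start inside the epsilon ball
    delta = torch.empty_like(lq_img).uniform_(-epsilon, epsilon)
    delta.requires_grad_(True)

    for _ in range(num_steps):
        adv = (lq_img + delta).clamp(0.0, 1.0)
        loss = F.mse_loss(restorer(adv), clean_out)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-epsilon, epsilon)      # project back into the epsilon ball
        delta.grad = None

    return (lq_img + delta).detach().clamp(0.0, 1.0)
```

Variants of this loop yield the attack families the paper evaluates: swapping the loss (e.g., identity or perceptual distance instead of MSE) changes the objective, masking `delta` to facial regions changes the attacked region, and varying `epsilon` changes the perturbation level.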