Abstract: The emergence of deep learning has led to the rise of malicious face manipulation applications, which pose a significant threat to face security. To prevent forgery generation at its source, researchers have proposed proactive methods that disrupt the manipulation models themselves. However, these methods output distorted images with unacceptable black shadows or distorted facial features, causing facial stigmatization. To address this issue, we propose a Universal Proactive Warning Defense (UPWD) method, which leads fake images to present a warning pattern against multiple manipulation models. Specifically, we propose an Invisible Protection Module that generates protection messages and a feature-level measure strategy that enhances the salience of warning patterns. Furthermore, we improve the universality of the method with Hard Model Meta-learning. Extensive experiments on the CelebA and LFWA datasets demonstrate that UPWD effectively defends against multiple manipulation models and outperforms existing methods.
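The core idea of a proactive warning defense, as described above, is to embed a small, bounded perturbation in a face image so that a manipulation model's output moves toward a chosen warning pattern rather than a convincing fake. The following is a minimal sketch of that optimization, not the paper's actual method: the "manipulation model" is a hypothetical fixed linear map standing in for a deep generative network, and the perturbation is found by projected gradient descent under an L-infinity budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a face-manipulation model: a fixed linear map.
# (The paper targets deep generative networks; this is only illustrative.)
W = rng.normal(size=(8, 8))

def manipulate(x):
    return W @ x

x = rng.normal(size=8)   # stand-in for a face image
warning = np.ones(8)     # stand-in for the target warning pattern
eps = 0.5                # L-inf budget keeping the perturbation invisible

delta = np.zeros(8)
lr = 0.01
for _ in range(500):
    # Gradient of ||manipulate(x + delta) - warning||^2 w.r.t. delta,
    # followed by projection back onto the L-inf ball of radius eps.
    grad = 2 * W.T @ (manipulate(x + delta) - warning)
    delta = np.clip(delta - lr * grad, -eps, eps)

before = np.linalg.norm(manipulate(x) - warning)
after = np.linalg.norm(manipulate(x + delta) - warning)
```

After optimization, the manipulated output of the protected image `x + delta` lies closer to the warning pattern than the output of the clean image (`after < before`), while `delta` itself stays within the invisibility budget.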