Abstract: Diffusion models (DMs) are state-of-the-art generative models that learn complex data distributions through iterative noise addition and denoising. Thanks to their superior capacity in generative tasks, DMs have been investigated for various applications in the communication field, such as network optimization, channel estimation, semantic communication, and cybersecurity. However, recent studies have shown their vulnerability to backdoor attacks, in which backdoored DMs consistently generate a designated harmful result, called the backdoor target, whenever the models' input contains a backdoor trigger. Although various backdoor techniques have been investigated for attacking DMs, defense methods against these threats remain limited and underexplored. In this paper, we introduce PureDiffusion, a novel backdoor defense framework that can efficiently detect backdoor attacks by inverting backdoor triggers embedded in DMs. Our extensive experiments on various trigger-target pairs show that PureDiffusion outperforms existing defense methods by a large margin in terms of fidelity (i.e., how closely the inverted trigger resembles the original trigger) and backdoor success rate (i.e., the rate at which the inverted trigger leads to the corresponding backdoor target). Notably, in certain cases, backdoor triggers inverted by PureDiffusion even achieve a higher attack success rate than the original triggers.
External IDs: dblp:conf/icc/TruongL25