Unconventional Face Adversarial Attack

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · ICANN (2) 2024 · CC BY-SA 4.0
Abstract: This paper advocates a new adversarial attack concept, coined unconventional face adversarial attack (UFAA). In contrast to traditional adversarial attacks, UFAA generates face images that can be identified by computers but cannot be recognized by human eyes. For face recognition, we argue that humans and computers may operate in different manners, focusing on visible appearance features and latent identity features, respectively. Accordingly, we propose a dual-branch face feature coupled network (FFCN) to disentangle the appearance and identity components of a face image. In particular, we learn appearance and identity features separately, while an identity-appearance coupled module lets the two branches exploit complementary information. Further, two reconstruction layers generate the corresponding appearance and identity images from the feature space. To ensure the completeness of information extraction, we propose a reunion module that fuses the appearance and identity images, reproducing the original input image as a self-supervision signal for FFCN learning. Experimental results on popular face datasets show that the attacked face images neither hinder identification by face recognizers nor reveal a discriminative appearance to human eyes, which agrees well with the advocated UFAA paradigm. This new type of adversarial attack can be used to protect the privacy of facial identities in scenarios where people can see but cannot understand.
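The abstract describes a dual-branch pipeline: two encoders disentangle appearance and identity, a coupled module exchanges complementary information between the branches, two reconstruction layers map features back to image space, and a reunion module fuses the two images so that reproducing the input serves as self-supervision. The following is a minimal structural sketch of that data flow only; every function body here (the toy scaling/shift transforms, the blending coefficients, the average fusion) is an illustrative placeholder, not the paper's learned network.

```python
# Toy sketch of the FFCN data flow from the abstract. All transforms
# below are stand-in arithmetic, assumed for illustration; in the paper
# each stage is a learned neural module.

def encode_appearance(x):
    # Appearance branch encoder (placeholder: simple scaling).
    return [v * 0.5 for v in x]

def encode_identity(x):
    # Identity branch encoder (placeholder: simple shift).
    return [v - 0.1 for v in x]

def couple(app_feat, id_feat):
    # Identity-appearance coupled module: each branch borrows
    # complementary information from the other (placeholder: a blend).
    app_c = [0.8 * a + 0.2 * i for a, i in zip(app_feat, id_feat)]
    id_c = [0.8 * i + 0.2 * a for a, i in zip(app_feat, id_feat)]
    return app_c, id_c

def reconstruct_appearance(f):
    # Reconstruction layer: appearance features -> appearance image.
    return [v / 0.5 for v in f]

def reconstruct_identity(f):
    # Reconstruction layer: identity features -> identity image.
    return [v + 0.1 for v in f]

def reunion(app_img, id_img):
    # Reunion module: fuse the two images to reproduce the input
    # (placeholder: element-wise average instead of a learned fusion).
    return [(a + i) / 2.0 for a, i in zip(app_img, id_img)]

def self_supervision_loss(x, x_hat):
    # Self-supervision: mean squared error between the input image
    # and its reunion-based reconstruction.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

x = [0.2, 0.5, 0.9]                      # a flattened toy "face image"
app_f, id_f = couple(encode_appearance(x), encode_identity(x))
app_img = reconstruct_appearance(app_f)  # what human eyes would see
id_img = reconstruct_identity(id_f)      # what a recognizer would use
loss = self_supervision_loss(x, reunion(app_img, id_img))
print(loss)
```

In this reading, the appearance image is the branch a human would judge, the identity image carries the machine-readable cues, and driving the reunion loss to zero ensures the two branches jointly retain all information in the input.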