Abstract: The issue of face privacy protection has attracted widespread social concern with the increasing use of face images. The latest methods focus on achieving a good privacy-utility tradeoff so that the protected results can still support downstream computer vision tasks. However, they offer limited flexibility in manipulating this tradeoff, even though practical requirements vary across scenarios. In this paper, we present a two-stage latent representation reorganization (LReOrg) framework for face image privacy protection, built on a conditional bidirectional network that is optimized with a distinct keyword-based swap training strategy and a multi-task loss. Privacy-sensitive information is anonymized in the first stage, and the destroyed useful information is recovered in the second stage according to user requirements. LReOrg is advantageous in: (a) enabling users to recurrently process fine-grained attributes; (b) providing flexible control over the privacy-utility tradeoff by specifying which attributes to anonymize or preserve via cross-modal keywords; and (c) eliminating the need for data annotations during network training. Experimental results on benchmark datasets demonstrate the superior ability of our approach to provide flexible protection of facial information.
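To make the two-stage pipeline concrete, the sketch below illustrates one way the stages could be wired together: stage one transforms a latent code conditioned on a "privacy" keyword embedding to anonymize sensitive attributes, and stage two transforms the result conditioned on a "utility" keyword embedding to recover useful attributes. This is a minimal sketch under stated assumptions, not the paper's actual architecture: the module names (ConditionalReorgBlock, LReOrgSketch), the MLP structure, and all dimensions are illustrative, and the real conditional bidirectional network, image encoder/decoder, keyword embedder, swap training strategy, and multi-task loss are not reproduced here.

```python
import torch
import torch.nn as nn

class ConditionalReorgBlock(nn.Module):
    """Hypothetical block: transforms a latent code conditioned on a
    keyword embedding (names and structure are illustrative)."""
    def __init__(self, latent_dim: int, cond_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, z: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Concatenate latent code and keyword condition, then remap.
        return self.net(torch.cat([z, cond], dim=-1))

class LReOrgSketch(nn.Module):
    """Two-stage sketch: stage 1 anonymizes keyword-specified private
    attributes; stage 2 recovers keyword-specified useful attributes."""
    def __init__(self, latent_dim: int = 512, cond_dim: int = 128):
        super().__init__()
        self.anonymize = ConditionalReorgBlock(latent_dim, cond_dim)
        self.recover = ConditionalReorgBlock(latent_dim, cond_dim)

    def forward(self, z, privacy_kw_emb, utility_kw_emb):
        z_anon = self.anonymize(z, privacy_kw_emb)    # stage 1
        z_out = self.recover(z_anon, utility_kw_emb)  # stage 2
        return z_out

# Usage: latent codes would come from a pretrained face encoder and
# keyword embeddings from a text encoder (both assumed, not shown).
model = LReOrgSketch()
z = torch.randn(4, 512)
kw_priv = torch.randn(4, 128)  # e.g., embedding of the keyword "identity"
kw_util = torch.randn(4, 128)  # e.g., embedding of the keyword "expression"
protected = model(z, kw_priv, kw_util)
print(protected.shape)  # torch.Size([4, 512])
```

Because both stages take a keyword embedding as input, which attributes are anonymized or preserved can be changed at inference time simply by swapping keywords, which is the mechanism behind the flexible privacy-utility control described above.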
Primary Subject Area: [Generation] Social Aspects of Generative AI
Secondary Subject Area: [Generation] Social Aspects of Generative AI
Relevance To Conference: The issue of face privacy protection has attracted widespread social concern with the increasing use of face images, which carry a great deal of personal information. For example, current face classifiers and DeepFake tools can easily extract personal information and generate illegal clone avatars, which may cause unforeseen trouble for individuals or organizations. In this paper, we present a novel latent representation reorganization framework based on a carefully crafted generative method that allows users to flexibly determine how to balance privacy protection and utility preservation. This study not only prevents unprotected face images from being freely accessed and misused by unauthorized users and attackers, but also enables the resulting data to be continuously used in downstream computer vision tasks without worrying about the constraints of privacy, ethics, laws, and regulations.
Supplementary Material: zip
Submission Number: 4186