Abstract: Soft-biometric privacy-enhancing techniques are machine learning methods that aim to: i) mitigate privacy concerns associated with face recognition by suppressing soft-biometric attributes in facial images (e.g., gender, age, ethnicity) and ii) make unsolicited extraction of sensitive personal information infeasible. Because such techniques are increasingly used in real-world applications, it is imperative to understand to what extent the privacy enhancement can be inverted and how much attribute information can be recovered from privacy-enhanced images. While these aspects are critical, they have not been investigated in the literature so far. In this paper, we therefore study the robustness of state-of-the-art soft-biometric privacy-enhancing techniques to attribute recovery attempts. We propose PrivacyProber, a high-level framework for restoring soft-biometric information from privacy-enhanced images, and apply it for attribute recovery in comprehensive experiments on three public face datasets (LFW, MUCT, and Adience). Our experiments show that the proposed framework is able to restore a considerable amount of the suppressed information, regardless of the privacy-enhancing technique used (e.g., adversarial perturbations, conditional synthesis, etc.), and that there are significant differences between the considered privacy models. These results point to the need for novel mechanisms that improve the robustness of existing techniques and secure them against adversaries trying to restore suppressed information. We also demonstrate that PrivacyProber can be used to detect privacy enhancement (under black-box assumptions) with high accuracy.