Model Inversion Attack against a Face Recognition System in a Black-Box Setting

APSIPA ASC 2021 (modified: 04 Nov 2022)
Abstract: A DNN-based face recognition system implicitly holds information about the facial characteristics of the individuals registered in it. This information could be maliciously revealed or stolen by a model inversion attack (MIA), which raises a serious privacy issue. To clarify how real the threat of MIA is, methods to perform MIA against a face recognition system have been studied in recent years. Theoretically, MIA is formulated as the problem of finding the image that maximizes the recognition score output by a target recognition system. This can be achieved by a gradient descent technique if the target system is a white box whose network structure and parameters are known, as most existing methods assume. However, this assumption is not necessarily realistic. Unlike the existing methods, in this paper we propose an MIA method that can be carried out against a black-box system. To enable the proposed method to generate natural-looking face images, we first introduce a deep face generator that produces a face image from a random feature vector, by which MIA is redefined as the problem of finding the best feature vector instead of the best image. The proposed method solves this problem with a gradient descent technique, in which the gradient of the recognition score is numerically approximated by perturbing the current feature vector several times. Our experimental results demonstrate that the proposed method successfully generates natural-looking face images containing personal facial characteristics, with performance comparable to existing white-box-oriented methods.
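
The sketch below illustrates the kind of black-box search the abstract describes: gradient ascent over a generator's feature vector, with the gradient of the recognition score estimated numerically from random perturbations. It is a minimal illustration, not the paper's released code; `generate` (the deep face generator) and `target_score` (the black-box recognition score) are hypothetical stand-ins, and all hyperparameters are assumed values.

```python
import numpy as np

# Hypothetical stand-ins for the components described in the abstract:
#   generate(z)      -- deep face generator mapping a feature vector z to an image
#   target_score(x)  -- recognition score returned by the black-box system for image x
# Both are assumptions for illustration only.

def estimate_gradient(z, target_score, generate, num_perturbations=20, sigma=0.1):
    """Numerically approximate the gradient of the recognition score w.r.t. z
    by perturbing the feature vector several times (zeroth-order estimate)."""
    base = target_score(generate(z))
    grad = np.zeros_like(z)
    for _ in range(num_perturbations):
        u = np.random.randn(*z.shape)                     # random perturbation direction
        delta = target_score(generate(z + sigma * u)) - base
        grad += (delta / sigma) * u                       # finite-difference contribution
    return grad / num_perturbations

def black_box_mia(target_score, generate, dim=512, steps=200, lr=0.05):
    """Gradient-ascent search for the feature vector that maximizes the score."""
    z = np.random.randn(dim)                              # random initial feature vector
    for _ in range(steps):
        g = estimate_gradient(z, target_score, generate)
        z = z + lr * g                                    # ascend the approximated gradient
    return generate(z)                                    # reconstructed face image
```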