Abstract: Machine learning powers a broad range of applications and relies on large amounts of data to train models, driving the adoption of convenient Machine Learning as a Service (MLaaS) offerings. This flexible paradigm, however, carries serious privacy implications: training data often contains sensitive features, and adversarial access to trained models poses a security risk. In a model inversion attack on a face recognition system, for instance, an adversary uses the model's output (the target label) to reconstruct the corresponding input (an image of the target individual from the training dataset). To avert this vulnerability, in this paper we develop a novel approach that applies a perceptual hash to parts of the given training images, leveraging the functional mechanism of image hashing. The facial recognition system is then trained on this newly created dataset of perceptually hashed images, and high classification accuracy is observed. Furthermore, we demonstrate a series of model inversion attacks emulating adversarial access that yield hashed images of the target individuals instead of the original training images, thereby preventing reconstruction of the originals and counteracting the inversion attack. Through rigorous empirical evaluation of the proposed formulation on a real-world dataset, we verify the effectiveness of our framework in protecting the training images and counteracting inversion attacks.
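To make the defense concrete, the following is a minimal sketch of region-level perceptual hashing, assuming the preprocessing step replaces a chosen region of each training image with a visual rendering of that region's pHash (via the `imagehash` library). The region box, hash size, rendering scheme, and file paths are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: replace part of a face image with a rendering of its perceptual hash.
# Assumption: the defense substitutes a sensitive region with its pHash bits;
# box coordinates, hash size, and paths below are hypothetical.
from PIL import Image
import imagehash
import numpy as np

def hash_region(img: Image.Image, box: tuple, hash_size: int = 8) -> Image.Image:
    """Return a copy of `img` with `box` (left, upper, right, lower)
    replaced by a visualization of the region's DCT-based perceptual hash."""
    region = img.crop(box)
    # imagehash.phash returns an ImageHash whose .hash attribute is a
    # boolean hash_size x hash_size numpy array.
    bits = imagehash.phash(region, hash_size=hash_size).hash
    # Render the hash bits as a black-and-white patch scaled to the region.
    patch = Image.fromarray(bits.astype(np.uint8) * 255)
    patch = patch.resize(region.size, Image.NEAREST)
    out = img.copy()
    out.paste(patch.convert(img.mode), box)
    return out

# Usage (hypothetical paths): hash the central area of a training image,
# then train the classifier on the resulting hashed dataset.
face = Image.open("train/person01.png")
w, h = face.size
hashed = hash_region(face, (w // 4, h // 4, 3 * w // 4, 3 * h // 4))
hashed.save("train_hashed/person01.png")
```

Under this sketch, an inversion attack that reconstructs a training input recovers only the hashed rendering of the sensitive region, not the original pixels.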