Abstract: Memorization of training data by deep neural networks enables an adversary to mount successful membership inference attacks: an adversary with black-box query access to a model can infer, using only its output predictions, whether an individual's data record was part of the model's sensitive training data. This violates both data confidentiality, since samples from proprietary training data can be inferred, and the privacy of the individuals whose sensitive records were used to train the model. The threat is especially acute in commercial embedded systems with on-device processing. Addressing it requires neural networks that are private by design while conforming to the memory, power, and computation constraints of embedded systems; such designs are lacking in the literature.
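To make the black-box threat model concrete, below is a minimal sketch of a confidence-threshold membership inference attack of the kind the abstract describes, where the adversary only observes output predictions. The `query_model` callable, the threshold `tau`, and its calibration on shadow data are illustrative assumptions, not the paper's method.

```python
import numpy as np

def confidence(probs: np.ndarray, label: int) -> float:
    """Model's softmax confidence on the record's true label."""
    return float(probs[label])

def infer_membership(query_model, record, label: int, tau: float = 0.9) -> bool:
    """Threshold attack: predict 'member' if the model is unusually
    confident on the record's true label.

    query_model : black-box API returning a softmax probability vector
                  (hypothetical interface for illustration)
    tau         : decision threshold, assumed calibrated on shadow data
    """
    probs = query_model(record)  # only the output predictions are used
    return confidence(probs, label) >= tau

# Toy usage: a stand-in "model" that returns a fixed softmax output.
toy_model = lambda x: np.array([0.02, 0.95, 0.03])
print(infer_membership(toy_model, record=None, label=1))  # True -> flagged as member
```

The sketch relies on the observation the abstract makes: memorized training records tend to receive higher prediction confidence than unseen records, so a simple threshold on the output already leaks membership.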