Abstract: We investigate the intricate relationship between the robustness of a Deep Neural Network (DNN) model, a typical safety property, and membership inference, a prominent attack on privacy. To this end, we introduce the notion of Patient Membership Inference in the context of personalized health and precision medicine, where personalized models are often adopted. Given a set of patients and a model trained on the data of one of them, Patient Membership Inference aims to identify the patient whose data was used for training. To do so, we exploit how the robustness of the model differs when it is evaluated on data from different patients. In contrast to classical membership inference, where the task is to determine whether a particular sample was part of the training set, patient membership inference does not assume access to the training data. As such, it demonstrates that access to training data is not necessary for membership inference and that membership inference is possible even for well-generalized models that do not suffer from overfitting. We evaluate our approach and demonstrate that robustness can be used to infer membership in two healthcare application domains: epileptic seizure detection and cardiac-rhythm abnormality detection.
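The core idea, using per-patient robustness as a membership signal, can be illustrated with a deliberately simplified sketch. This is not the paper's actual attack: the patient names, the toy 2-D data, and the use of distance to a linear decision boundary as a robustness proxy are all illustrative assumptions. The sketch trains a "personalized" logistic-regression model on one patient's data, then infers the member as the patient whose samples the model is most robust on (i.e., whose samples need the largest perturbation to flip the prediction).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D samples for three hypothetical patients (names are illustrative).
# Each patient's samples follow a slightly different distribution.
patients = {
    "patient_A": rng.normal([2.0, 2.0], 0.3, size=(50, 2)),
    "patient_B": rng.normal([1.0, 1.0], 0.3, size=(50, 2)),
    "patient_C": rng.normal([0.0, 0.5], 0.3, size=(50, 2)),
}

# A linear "personalized model" fitted to patient_A's data only:
# it separates patient_A's samples (label 1) from background noise (label 0).
member = "patient_A"
X = np.vstack([patients[member], rng.normal([-1.0, -1.0], 0.3, size=(50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])

# Plain gradient-descent logistic regression (no external dependencies).
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def mean_margin(samples):
    """Mean distance of samples to the decision boundary: a crude robustness
    proxy. A larger margin means a larger perturbation is needed to flip
    the model's prediction on those samples."""
    return float(np.mean(np.abs(samples @ w + b) / np.linalg.norm(w)))

# The attacker never sees the training set: it only queries the model with
# each candidate patient's data and compares the robustness scores.
scores = {name: mean_margin(s) for name, s in patients.items()}
inferred = max(scores, key=scores.get)
print(inferred)
```

In this toy setting, the model is markedly more robust on the member's samples, so the attacker recovers the training patient from robustness alone, without any access to the training set. The paper's actual evaluation is on DNNs for seizure and cardiac-rhythm abnormality detection, where robustness is a more involved property than a linear margin.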
External IDs: dblp:conf/ijcnn/BaninajjarAHAR25