Keywords: Privacy, Metric Learning, Representation Learning, Deep Learning
Abstract: Membership inference attacks (MIAs) are currently considered one of the main privacy attack strategies, and defense mechanisms against them have been extensively explored. However, a gap remains between existing defense approaches and ideal models, in both performance and deployment cost. In particular, we observe that a model's privacy vulnerability is closely correlated with the gap between its data-memorizing ability and its generalization ability. To address this, we propose a new architecture-agnostic training paradigm called Center-based Relaxed Learning (CRL), which is adaptable to any classification model and provides privacy preservation with minimal or no loss of model generalizability. We emphasize that CRL better maintains the model's consistency between member and non-member data. Through extensive experiments on common classification datasets, we empirically show that this approach achieves comparable performance without requiring additional model capacity or data.
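The abstract does not spell out CRL's objective; see the linked code repository for the authors' actual method. Purely as a hypothetical sketch of what a "center-based relaxed" penalty could look like (the function name, the margin parameter, and the formulation are all assumptions, not the paper's definition), one can pull each embedding toward its class center but stop applying pressure once it is within a relaxation margin, so member samples are not driven to be memorized more tightly than necessary:

```python
import numpy as np

def center_relaxed_loss(features, labels, centers, margin=1.0):
    """Hypothetical center-based relaxed penalty (illustrative only, not
    the CRL objective from the paper): pull each embedding toward its
    class center, but incur zero loss once it lies within `margin`,
    relaxing the pressure that drives over-memorization of members.

    features: (N, D) embeddings; labels: (N,) class ids;
    centers: (C, D) per-class center vectors.
    """
    # Euclidean distance from each embedding to its own class center
    dist = np.linalg.norm(features - centers[labels], axis=1)
    # Hinge at the margin: no penalty inside the relaxation radius
    return np.maximum(dist - margin, 0.0).mean()

# Toy usage: two 2-D embeddings, both class centers at the origin
feats = np.array([[0.0, 0.0], [3.0, 4.0]])
labs = np.array([0, 1])
cents = np.array([[0.0, 0.0], [0.0, 0.0]])
print(center_relaxed_loss(feats, labs, cents, margin=1.0))  # (0 + 4) / 2 = 2.0
```

In practice such a term would be combined with a standard classification loss; the margin controls how strongly in-distribution (member) samples are clustered, which is the kind of memorization/generalization trade-off the abstract describes.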
Supplementary Material: zip
List Of Authors: Fang, Xingli and Kim, Jung-Eun
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/JEKimLab/UAI24_CRL
Submission Number: 269