Protective Label Enhancement for Label Privacy

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Over the past decade, much sensitive data has been gathered from individual devices for commercial value without effective safeguards, which can lead to serious privacy leakage. Here we study label differential privacy (label DP), the setting in which only the labels are sensitive. The private labels generated by previous methods do not take into account the label confidence corresponding to the features, and repeated sampling could be exploited to identify the true labels. In this paper, a novel approach called Protective Label Enhancement (PLE) is proposed to mask the true label within a label distribution while ensuring that the protective label distribution remains useful for training an effective predictive model on the server. Specifically, when we generate the label distribution, the true label is mixed with several randomly chosen labels, and the true label is penalized when it sits at the top of the label distribution. Meanwhile, if the true label's probability almost vanishes, it is compensated to preserve statistical effectiveness. Furthermore, we provide theoretical guarantees that the predictive model is classifier-consistent and that learning with the protective label distribution is ERM learnable. Finally, experimental results validate the effectiveness of the proposed approach for solving the label DP problem.
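The masking procedure described in the abstract (mix the true label with random decoy labels, penalize it when it ranks first, compensate it when its mass nearly vanishes) can be sketched as follows. This is a minimal illustration of the general idea only, not the paper's actual algorithm; the function and parameter names (`k`, `top_penalty`, `floor`) are hypothetical.

```python
import numpy as np

def protective_label_distribution(y_true, num_classes, k=3,
                                  top_penalty=0.5, floor=0.05, rng=None):
    """Sketch of a PLE-style protective label distribution.

    Hypothetical parameters; the paper's exact procedure may differ:
      k           -- number of random decoy labels mixed in
      top_penalty -- shrink factor applied if the true label ranks first
      floor       -- minimum mass restored if the true label vanishes
    """
    if rng is None:
        rng = np.random.default_rng()
    dist = np.zeros(num_classes)
    # Mix: random mass over the true label and k random decoy labels.
    decoys = rng.choice([c for c in range(num_classes) if c != y_true],
                        size=k, replace=False)
    support = np.concatenate(([y_true], decoys))
    dist[support] = rng.random(k + 1)
    dist /= dist.sum()
    # Punish the true label if it sits at the top of the distribution.
    if dist.argmax() == y_true:
        dist[y_true] *= top_penalty
        dist /= dist.sum()
    # Compensate if the true label's mass has almost vanished.
    if dist[y_true] < floor:
        dist[y_true] = floor
        dist /= dist.sum()
    return dist
```

Under this sketch, the server never receives a one-hot true label: it sees a distribution whose top entry is, by construction, not reliably the true class, yet whose support always retains some mass on the true label so that training signal is not destroyed.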
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)