TL;DR: We initiate a study of algorithms for model training with user-level differential privacy (DP), where each example is associated with multiple users
Abstract: We initiate a study of algorithms for model training with user-level differential privacy (DP), where each example may be attributed to multiple users, which we call the multi-attribution model. We first provide a carefully chosen definition of user-level DP under the multi-attribution model. Training in the multi-attribution model is facilitated by solving the contribution bounding problem, i.e., the problem of selecting a subset of the dataset in which each user is associated with only a limited number of examples. We propose a greedy baseline algorithm for the contribution bounding problem. We then empirically study this algorithm on a synthetic logistic regression task and a transformer training task, including variants of the baseline that optimize the chosen subset using different techniques and criteria. We find that the baseline algorithm remains competitive with its variants in most settings, and we build a better understanding of the practical importance of a bias-variance tradeoff inherent in solutions to the contribution bounding problem.
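The abstract does not spell out the greedy rule, but one plausible reading is a single pass over the dataset that keeps an example only if none of its attributed users has yet reached a per-user cap. The Python sketch below illustrates this reading; the function name `greedy_contribution_bounding`, the parameter `max_per_user`, and the pass order are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of a greedy contribution-bounding pass (an assumption,
# not the paper's stated algorithm): scan the dataset once and keep an
# example only if every user attributed to it is still below a per-user cap.
from collections import defaultdict

def greedy_contribution_bounding(examples, max_per_user):
    """examples: list of (example_id, attributed_user_ids) pairs.

    Returns the ids of a subset of examples such that no user is
    attributed to more than max_per_user of the kept examples.
    """
    counts = defaultdict(int)  # number of kept examples per user
    kept = []
    for example_id, user_ids in examples:
        # Keep the example only if it would not push any attributed
        # user past the cap.
        if all(counts[u] < max_per_user for u in user_ids):
            for u in user_ids:
                counts[u] += 1
            kept.append(example_id)
    return kept

# Example: with a cap of 1, the second example is dropped because
# user "alice" has already contributed one kept example.
dataset = [(0, ["alice", "bob"]), (1, ["alice"]), (2, ["carol"])]
print(greedy_contribution_bounding(dataset, max_per_user=1))  # [0, 2]
```

A stricter cap discards more data (more bias from the smaller subset) while a looser cap requires more noise for the same user-level DP guarantee (more variance), which is the bias-variance tradeoff the abstract refers to.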
Lay Summary: When we train models with privacy guarantees, we usually assume each piece of data we train on has privacy implications for only a single person. However, in many settings a piece of data can have privacy implications for multiple people. For example, a text or email message may contain privacy-sensitive information about both the sender and the recipients, and a photo may contain multiple people's faces. We introduce a privacy guarantee that accommodates associating multiple people's privacy with each piece of data, along with a framework for training models under this guarantee, and we demonstrate the framework's effectiveness.
Primary Area: Social Aspects->Privacy
Keywords: differential privacy, multi-attribution
Submission Number: 13939