Abstract: Much of the literature on differential privacy focuses on item-level privacy, where, loosely speaking, the goal is to provide privacy for each individual item or training example. However, many recent practical applications, such as federated learning, require preserving privacy for all items of a single user, which is much harder to achieve. Understanding the theoretical limits of user-level privacy is therefore crucial.
We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy. If each user has $m$ samples, we show that straightforward applications of the Laplace or Gaussian mechanisms require the number of users to be $O(k/(m\alpha^2) + k/(\varepsilon\alpha))$ to achieve an $\ell_1$ distance of $\alpha$ between the true and estimated distributions, with the privacy-induced penalty $k/(\varepsilon\alpha)$ independent of the number of samples per user $m$. Moreover, we show that any mechanism that operates only on the final aggregate counts must incur a user complexity of the same order. We then propose a mechanism such that the number of users
scales as $\tilde{O}(k/(m\alpha^2) + k/(\sqrt{m}\,\varepsilon\alpha))$, and hence the privacy penalty is $\tilde{\Theta}(\sqrt{m})$ times smaller compared to the standard mechanisms in certain settings of interest. We further show that the proposed mechanism is nearly optimal in certain regimes.
We also propose general techniques for obtaining lower bounds on restricted
differentially private estimators and a lower bound on the total variation distance between binomial distributions, both of which might be of independent interest.
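As a concrete illustration of the baseline discussed above, here is a minimal Python sketch of a Laplace mechanism applied to the final aggregate counts; the function name, the equal-samples-per-user assumption, and the simplex projection are illustrative choices, not the paper's exact estimator. Because a single user can shift the aggregate histogram by up to $m$ in $\ell_1$, the noise scale must be $m/\varepsilon$, which is why this baseline's privacy penalty does not improve with $m$:

```python
import numpy as np

def laplace_baseline_estimate(user_samples, k, epsilon, rng=None):
    """Baseline user-level DP estimator over k symbols (illustrative sketch).

    Changing one user's data moves the aggregate counts by at most m in l1
    (each user holds m samples), so the Laplace scale is m/epsilon rather
    than the item-level 1/epsilon; the resulting l1 error is independent of
    m, matching the k/(eps*alpha) penalty in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = len(user_samples[0])  # samples per user, assumed equal across users
    counts = np.zeros(k)
    for samples in user_samples:  # symbols assumed to lie in {0, ..., k-1}
        counts += np.bincount(samples, minlength=k)
    noisy = counts + rng.laplace(scale=m / epsilon, size=k)
    p = np.clip(noisy, 0.0, None)  # drop negative noisy counts
    return p / p.sum() if p.sum() > 0 else np.full(k, 1.0 / k)
```

For instance, with $n$ users each holding $m$ samples, `laplace_baseline_estimate(data, k, epsilon)` returns a noisy plug-in estimate of the distribution; the proposed mechanism in the paper improves on this direct aggregation by a $\tilde{\Theta}(\sqrt{m})$ factor in the privacy term.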