Ranking Recovery under Privacy Considerations

Published: 21 Jul 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: We consider the private ranking recovery problem, in which a data collector seeks to estimate the permutation/ranking of a data vector given a randomized (privatized) version of it. We aim to establish fundamental trade-offs between the performance of the estimation task, measured by the probability of error, and the level of privacy that can be guaranteed when the privacy mechanism consists of adding artificial noise. Towards this end, we show the optimality of a low-complexity decision rule (referred to as the linear decoder) for the estimation task under several noise distributions widely used in the privacy literature (e.g., the Gaussian, Laplace, and generalized normal models). We derive the Taylor series of the probability of error, which yields its first- and second-order approximations when such a linear decoder is employed. We quantify the guaranteed level of privacy using differential-privacy-type metrics, such as $\epsilon$-DP and $(\alpha,\epsilon)$-Rényi DP. Finally, we combine these results to characterize trade-offs between privacy and the probability of error.
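The sketch below illustrates the abstract's setup under simplifying assumptions: a data vector is privatized by adding i.i.d. Laplace noise (one of the noise models mentioned), and the collector estimates the ranking from the noisy observation by sorting it. Treating "sort the noisy vector" as the low-complexity decoder is an assumption for illustration only; the paper's exact linear decoder and noise calibration are not reproduced here.

```python
import numpy as np

def laplace_mechanism(x, epsilon, sensitivity=1.0, rng=None):
    """Privatize a data vector by adding i.i.d. Laplace noise with scale
    sensitivity / epsilon (standard Laplace mechanism; the sensitivity
    value here is an illustrative assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=x.shape)

def ranking(v):
    """Return the permutation that sorts v in descending order,
    i.e., the ranking induced by the entries of v."""
    return np.argsort(-v)

# Example: the collector observes only the privatized vector y and
# estimates the ranking of the true data vector x from it.
rng = np.random.default_rng(0)
x = np.array([0.9, 0.1, 0.5, 0.7])               # true data vector
y = laplace_mechanism(x, epsilon=2.0, rng=rng)    # privatized observation
est_rank, true_rank = ranking(y), ranking(x)
print("estimated:", est_rank, "true:", true_rank,
      "error:", not np.array_equal(est_rank, true_rank))
```

Averaging the error indicator over many noise draws gives an empirical estimate of the probability of error for a given privacy budget $\epsilon$, which is the kind of privacy–utility trade-off the paper characterizes analytically.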
Submission Length: Long submission (more than 12 pages of main content)
Video: https://user-images.githubusercontent.com/64623798/180063063-37b3a07a-9e12-4928-896c-09e8b8ee6b33.mp4
Assigned Action Editor: ~Gautam_Kamath1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 119