Privately Learning Subspaces

21 May 2021 (edited 27 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: Differential Privacy, Learning, Machine Learning, Data Privacy, Subspaces
  • TL;DR: We give differentially private algorithms that learn the subspace in which the data lies, with no cost in sample complexity in terms of the ambient dimension.
  • Abstract: Private data analysis suffers a costly curse of dimensionality. However, the data often has an underlying low-dimensional structure. For example, when optimizing via gradient descent, the gradients often lie in or near a low-dimensional subspace. If that low-dimensional structure can be identified, then we can avoid paying (in terms of privacy or accuracy) for the high ambient dimension. We present differentially private algorithms that take input data sampled from a low-dimensional linear subspace (possibly with a small amount of error) and output that subspace (or an approximation to it). These algorithms can serve as a pre-processing step for other procedures.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
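To make the setting concrete, here is a minimal non-private sketch of the task the abstract describes: given samples lying exactly in a k-dimensional linear subspace of R^d, recover an orthonormal basis for that subspace via SVD. This is only the non-private baseline, not the paper's algorithm; a differentially private version would have to randomize this computation (for example, by perturbing the second-moment matrix before the eigendecomposition), and the names `recover_subspace` and `subspace_distance` are illustrative helpers, not from the paper.

```python
# Illustrative sketch (not the paper's algorithm): non-private recovery of a
# k-dimensional subspace from exact samples via SVD. A DP variant would need
# to add noise somewhere in this pipeline.
import numpy as np

def recover_subspace(X, k):
    """Return an orthonormal basis (d x k) spanning the top-k right
    singular subspace of the n x d data matrix X."""
    # Rows of Vt are the right singular vectors of X.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T

def subspace_distance(U, V):
    """Spectral-norm distance between the orthogonal projectors of two
    subspaces given by orthonormal bases U and V."""
    return np.linalg.norm(U @ U.T - V @ V.T, ord=2)

rng = np.random.default_rng(0)
d, k, n = 10, 3, 100
# Ground-truth basis: orthonormalize a random d x k matrix.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
# n samples lying exactly in the k-dimensional subspace spanned by B.
X = rng.standard_normal((n, k)) @ B.T
U = recover_subspace(X, k)
print(subspace_distance(U, B))  # near zero: the subspace is recovered
```

In the exact (noiseless) case the top-k right singular vectors span the data subspace precisely; the paper's contribution is achieving this privately, with sample complexity that does not pay for the ambient dimension d.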