Keywords: Machine Learning, Differential Privacy, Computational Learning Theory, Sample Complexity, Half-spaces
Verify Author List: I have double-checked the author list and understand that additions and removals will not be allowed after the submission deadline.
TL;DR: We prove a new sample complexity bound for privately learning half-spaces.
Abstract: We present a differentially private learner for half-spaces over a finite domain $\mathcal{X}^d\subseteq\mathbb{R}^d$ with sample complexity $\tilde{O}(d\cdot\log^*\vert\mathcal{X}\vert)$, which improves over $\tilde{O}(d^{2.5}\cdot 8^{\log^*\vert\mathcal{X}\vert})$, the state-of-the-art result of [Kaplan et al., 2020]. The building block of our result is a novel differentially private algorithm that learns half-spaces by iteratively learning thresholds on angles.
A Signed Permission To Publish Form In Pdf: pdf
Supplementary Material: pdf
Primary Area: Trustworthy Machine Learning (accountability, explainability, transparency, causality, fairness, privacy, robustness, autoML, etc.)
Paper Checklist Guidelines: I certify that all co-authors of this work have read and commit to adhering to the guidelines in Call for Papers.
Student Author: Yes
Submission Number: 192