Abstract: The remarkable capacity of machine learning models to mine underlying information from data has raised concerns about privacy disclosure, making privacy-preserving learning algorithms an active research topic. In this paper, we focus on Gaussian process classification (GPC) under differential privacy (DP), a provably secure and practical privacy model. We first apply a functional mechanism to design a basic privacy-preserving GP classifier: we bound the sensitivity of the classifier's output and perturb the trained classifier with Gaussian process noise scaled in proportion to that sensitivity. We then propose a variant-noise mechanism that perturbs the classifier with noise of different scales depending on the local density of the dataset. We show, both theoretically and experimentally, that this method significantly reduces the added noise while largely preserving the accuracy of the classifier.
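To make the basic mechanism concrete, the sketch below illustrates the general pattern the abstract describes: release a trained GP latent function plus correlated Gaussian process noise calibrated to a sensitivity bound. It is a minimal illustration under our own assumptions, not the paper's algorithm: the function name `private_gp_classifier`, the RBF kernel, the regression-style latent mean (standing in for a full Laplace/EP classification posterior), and the precomputed RKHS `sensitivity` are all hypothetical choices for exposition.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def private_gp_classifier(X, y, X_test, epsilon, delta, sensitivity,
                          lengthscale=1.0, jitter=1e-6, rng=None):
    """Release a DP estimate of the GP latent function at X_test.

    `sensitivity` is assumed to be a precomputed bound on how much the
    trained latent function can change when one record of (X, y) changes.
    """
    rng = np.random.default_rng(rng)
    K = rbf_kernel(X, X, lengthscale) + jitter * np.eye(len(X))
    K_star = rbf_kernel(X_test, X, lengthscale)
    # Non-private latent mean (regression-style; classification link elided).
    f_mean = K_star @ np.linalg.solve(K, y.astype(float))
    # Draw correlated noise from a zero-mean GP with the same kernel and
    # scale it by c(delta) * sensitivity / epsilon (Gaussian-mechanism style).
    c = np.sqrt(2.0 * np.log(1.25 / delta))
    K_test = rbf_kernel(X_test, X_test, lengthscale) + jitter * np.eye(len(X_test))
    noise = rng.multivariate_normal(np.zeros(len(X_test)), K_test)
    f_private = f_mean + (c * sensitivity / epsilon) * noise
    # Squash the private latent values into class probabilities.
    return 1.0 / (1.0 + np.exp(-f_private))
```

Under the abstract's description, the variant-noise extension would replace the single global noise scale above with per-point scales tied to the local density of the training data, so that sparse regions absorb less perturbation; the paper's precise scaling rule is not specified here, so we leave it as a global constant in the sketch.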