Subject Level Differential Privacy with Hierarchical Gradient Averaging

Published: 21 Oct 2022, Last Modified: 05 May 2023, FL-NeurIPS 2022 Poster
Keywords: federated learning, differential privacy, subject granularity
TL;DR: This paper introduces a new algorithm, called Hierarchical Gradient Averaging, that enforces subject level Differential Privacy in the Federated Learning setting and delivers better model utility than prior approaches.
Abstract: Subject Level Differential Privacy (DP) is a granularity of privacy recently studied in the Federated Learning (FL) setting, where a subject is defined as an individual whose private data is embodied by multiple data records that may be distributed across a multitude of federation users. This granularity is distinct from item level and user level privacy appearing in the literature. Prior work on subject level privacy in FL focuses on algorithms that are derivatives of group DP or enforce user level Local DP (LDP). In this paper, we present a new algorithm – Hierarchical Gradient Averaging (HiGradAvgDP) – that achieves subject level DP by constraining the effect of individual subjects on the federated model. We prove the privacy guarantee for HiGradAvgDP and empirically demonstrate its effectiveness in preserving model utility on the FEMNIST and Shakespeare datasets. We also report, for the first time, a unique problem of privacy loss composition, which we call horizontal composition, that is relevant only to subject level DP in FL. We show how horizontal composition can adversely affect model utility by either increasing the noise necessary to achieve the DP guarantee, or by constraining the amount of training done on the model.
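The abstract describes HiGradAvgDP only at a high level: each subject's influence on the model update is bounded before noise is added. The sketch below is a hedged illustration of that general idea, not the paper's actual algorithm; the function name `higrad_avg_dp_step`, the two-stage averaging structure, and the noise calibration are assumptions made for illustration. It first averages each subject's record gradients into a single per-subject gradient, then clips those per-subject gradients, averages across subjects, and adds Gaussian noise scaled to the per-subject sensitivity.

```python
import numpy as np

def higrad_avg_dp_step(per_record_grads, subject_ids, clip_norm=1.0,
                       noise_mult=1.0, rng=None):
    """Illustrative sketch (not the paper's exact algorithm) of subject
    level DP via hierarchical gradient averaging.

    Stage 1: average each subject's record gradients, so every subject
    contributes exactly one "subject gradient" regardless of how many
    records it owns.
    Stage 2: clip each subject gradient to clip_norm, average across
    subjects, and add Gaussian noise calibrated to the per-subject
    sensitivity (clip_norm / num_subjects).
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = np.asarray(per_record_grads, dtype=float)
    ids = np.asarray(subject_ids)
    subjects = np.unique(ids)

    # Stage 1: per-subject averaging of record gradients.
    subject_grads = np.stack([grads[ids == s].mean(axis=0) for s in subjects])

    # Stage 2: clip each subject's averaged gradient to clip_norm.
    norms = np.linalg.norm(subject_grads, axis=1, keepdims=True)
    clipped = subject_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Removing one subject changes the mean by at most clip_norm / num_subjects,
    # so Gaussian noise is scaled to that sensitivity.
    sigma = noise_mult * clip_norm / len(subjects)
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=clipped.shape[1])
```

Because every subject is collapsed into one bounded contribution before aggregation, the sensitivity of the released update with respect to any single subject is capped, which is what a subject level DP guarantee requires.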