On the Efficacy of Differentially Private Few-shot Image Classification

Published: 19 Dec 2023, Last Modified: 19 Dec 2023. Accepted by TMLR.
Abstract: There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches that of the best non-private models. These DP models are typically pretrained on large public datasets and then fine-tuned on private downstream datasets that are relatively large and similar in distribution to the pretraining data. However, in many applications including personalization and federated learning, it is crucial to perform well (i) in the few-shot setting, as obtaining large amounts of labeled data may be problematic; and (ii) on datasets from a wide variety of domains for use in various specialist settings. To understand under which conditions few-shot DP can be effective, we perform an exhaustive set of experiments that reveals how the accuracy and vulnerability to attack of few-shot DP image classification models are affected as the number of shots per class, privacy level, model architecture, downstream dataset, and subset of learnable parameters in the model vary. We show that to achieve DP accuracy on par with non-private models, the shots per class must be increased as the privacy level increases. We also show that learning parameter-efficient FiLM adapters under DP is competitive with learning just the final classifier layer or learning all of the network parameters. Finally, we evaluate DP federated learning systems and establish state-of-the-art performance on the challenging FLAIR benchmark.
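To make the FiLM-adapter approach mentioned in the abstract concrete, below is a minimal sketch of a FiLM (feature-wise linear modulation) layer and of restricting fine-tuning to the FiLM and classifier-head parameters. It is not the authors' implementation (that lives in the linked repository); the helper name `film_and_head_parameters` and the assumption that the classifier module's name ends with "head" are hypothetical illustration choices.

```python
import torch
import torch.nn as nn


class FiLM(nn.Module):
    """Feature-wise linear modulation: a per-channel scale (gamma) and shift (beta).

    Minimal sketch; adapter placement and initialisation in the paper are
    defined in https://github.com/cambridge-mlg/dp-few-shot.
    """

    def __init__(self, num_channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map from a frozen backbone block.
        return self.gamma.view(1, -1, 1, 1) * x + self.beta.view(1, -1, 1, 1)


def film_and_head_parameters(model: nn.Module):
    """Freeze the backbone and keep only FiLM adapters and the classifier head trainable.

    Hypothetical helper: assumes the classifier module's name ends with "head".
    """
    for p in model.parameters():
        p.requires_grad = False
    for name, module in model.named_modules():
        if isinstance(module, FiLM) or name.endswith("head"):
            for p in module.parameters():
                p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]
```

In the private setting, only this small set of parameters would then be updated with DP-SGD (per-example gradient clipping plus calibrated Gaussian noise), for example via a library such as Opacus; keeping the trainable parameter count small is one reason parameter-efficient adapters can remain competitive under DP.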
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/cambridge-mlg/dp-few-shot
Supplementary Material: zip
Assigned Action Editor: ~Sanghyun_Hong1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1443