Learning to Generate Image Embeddings with User-Level Differential Privacy

Published: 01 Jan 2023 · Last Modified: 30 Sept 2024 · CVPR 2023 · CC BY-SA 4.0
Abstract: Small on-device models have been successfully trained with user-level differential privacy (DP) for next word prediction and image classification tasks in the past. However, existing methods can fail when directly applied to learn embedding models using supervised training data with a large class space. To achieve user-level DP for large image-to-embedding feature extractors, we propose DP-FedEmb, a variant of federated learning algorithms with per-user sensitivity control and noise addition, to train from user-partitioned data centralized in the datacenter. DP-FedEmb combines virtual clients, partial aggregation, private local fine-tuning, and public pre-training to achieve strong privacy-utility trade-offs. We apply DP-FedEmb to train image embedding models for faces, landmarks, and natural species, and demonstrate its superior utility under the same privacy budget on the benchmark datasets DigiFace, EMNIST, GLD, and iNaturalist. We further illustrate that it is possible to achieve strong user-level DP guarantees of ε < 2 while controlling the utility drop within 5%, when millions of users can participate in training.
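As a rough illustration of the "per-user sensitivity control and noise addition" the abstract refers to, the sketch below clips each user's model update to a fixed norm and adds Gaussian noise to the aggregate before averaging, in the style of DP-FedAvg. This is a minimal sketch under stated assumptions, not the paper's implementation; the function and parameter names (aggregate_with_user_level_dp, clip_norm, noise_multiplier) are illustrative, and it omits DP-FedEmb's virtual clients, partial aggregation, and local fine-tuning.

```python
import numpy as np

def aggregate_with_user_level_dp(user_updates, clip_norm=1.0, noise_multiplier=1.0):
    """Illustrative DP-FedAvg-style aggregation (not the paper's API).

    Each user's update is clipped to `clip_norm` so that a single user's
    contribution has bounded sensitivity; Gaussian noise calibrated to that
    bound is then added to the sum before averaging.
    """
    clipped = []
    for update in user_updates:
        norm = np.linalg.norm(update)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(update * scale)  # per-user sensitivity control

    total = np.sum(clipped, axis=0)
    # Noise standard deviation scales with the clip norm (user-level sensitivity).
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(user_updates)

# Toy usage: three users, each contributing a flattened model-update vector.
updates = [np.random.randn(10) for _ in range(3)]
averaged = aggregate_with_user_level_dp(updates, clip_norm=1.0, noise_multiplier=0.5)
```

The clipping step is what lets the Gaussian mechanism's noise scale be stated per user rather than per example, which is the sense of "user-level" DP in the abstract.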