Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming

SaTML 2023 · 24 Aug 2022 (modified: 05 May 2023)
Keywords: Model Reprogramming, Federated Learning, Differential Privacy
TL;DR: Our paper proposes Reprogrammable-FL, a reprogrammable federated learning framework that outperforms standard transfer learning and training-from-scratch approaches in improving the privacy-utility tradeoff.
Abstract: Model reprogramming (MR) is an emerging and powerful technique that enables cross-domain machine learning by repurposing a model well-trained on a source task for a different target task without finetuning the model weights. In this work, we propose Reprogrammable-FL, the first framework adapting MR to the setting of differentially private federated learning (FL), and demonstrate that it significantly improves the utility-privacy tradeoff compared to standard transfer learning methods (full/partial finetuning) and training from scratch in FL. Experimental results on several deep neural networks and datasets show that, given the same privacy budget, accuracy can improve by more than 60%. The code repository can be found at https://github.com/IBM/reprogrammble-FL.
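To make the core idea concrete, below is a minimal, hedged sketch of the general model reprogramming recipe the abstract describes, not the paper's actual implementation: a frozen pretrained source model is reused for a new target task by training only a small additive input "program" and an output label mapping. The class name, shapes, and the choice of a learned linear label mapping are illustrative assumptions.

```python
# Sketch only: general model reprogramming idea (frozen source model +
# trainable input program + label mapping). Not the authors' code.
import torch
import torch.nn as nn
from torchvision import models


class ReprogrammedModel(nn.Module):
    """Wraps a frozen pretrained source model with a trainable additive
    input perturbation (the "program") and a trainable label mapping."""

    def __init__(self, source_model, input_shape, num_target_classes,
                 num_source_classes=1000):
        super().__init__()
        self.source_model = source_model
        for p in self.source_model.parameters():
            p.requires_grad = False  # source weights stay frozen

        # Trainable additive perturbation applied to every target input.
        self.program = nn.Parameter(torch.zeros(1, *input_shape))
        # Learned linear mapping from source-class logits to target classes
        # (fixed many-to-one mappings are another common choice).
        self.label_map = nn.Linear(num_source_classes, num_target_classes)

    def forward(self, x):
        source_logits = self.source_model(x + self.program)
        return self.label_map(source_logits)


# Usage example, assuming 3x224x224 target inputs and a 10-class target task.
model = ReprogrammedModel(
    source_model=models.resnet18(weights=models.ResNet18_Weights.DEFAULT),
    input_shape=(3, 224, 224),
    num_target_classes=10,
)
# Only the program and label mapping are optimized; in a DP-FL setting,
# these few parameters would be the ones clipped, noised, and communicated.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)
```

Because only the input program and label mapping are trained, the number of parameters exposed to differentially private noise and client-server communication is far smaller than in full finetuning, which is the intuition behind the improved utility-privacy tradeoff.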