Quantifying and Defending against the Privacy Risk in Logit-based Federated Learning

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Logit-based Federated Learning, Privacy Attack, Defense
TL;DR: We identify and quantify a hidden privacy risk in logit-based federated learning methods, and then propose a simple but effective perturbation-based defense strategy against this privacy risk.
Abstract: Federated learning (FL) aims to protect data privacy by collaboratively learning a model without sharing private data among clients. Recent logit-based FL methods share model outputs (i.e., logits) on public data, rather than model weights or gradients, during training to enable model heterogeneity, reduce communication overhead, and preserve clients’ privacy. However, the privacy risk of these logit-based methods has been largely overlooked. To the best of our knowledge, this work presents the first theoretical and empirical analysis of a hidden privacy risk in logit-based FL methods: the risk that a semi-honest server (the adversary) may learn clients’ private models from the shared logits. To quantify the impact of this risk, we develop an effective attack, the Adaptive Model Stealing Attack (AdaMSA), which leverages historical logits during training. We also provide a theoretical bound on this privacy risk. We then propose a simple but effective defense strategy that perturbs the transmitted logits in the direction that minimizes the privacy risk while maximally preserving training performance. Experimental results validate our analysis and demonstrate the effectiveness of both the proposed attack and the defense strategy.
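The abstract's two technical ingredients can be made concrete with short sketches. First, the core of any logit-based model-stealing attack is ordinary knowledge distillation: the semi-honest server fits a surrogate model to the logits a client shares on public data. The sketch below shows a generic single-round distillation step in PyTorch; it is not the paper's AdaMSA (which additionally exploits historical logits across training rounds), and the names `surrogate`, `public_x`, `client_logits`, and the temperature `T` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def surrogate_step(surrogate, optimizer, public_x, client_logits, T=2.0):
    """One knowledge-distillation step: fit the server's surrogate model to
    the logits a client shared on a public batch. A generic sketch only, not
    the paper's AdaMSA, which additionally leverages historical logits."""
    optimizer.zero_grad()
    student_logits = surrogate(public_x)
    # Standard soft-label KL distillation loss at temperature T.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(client_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Second, the proposed defense perturbs the transmitted logits so the adversary learns less while the distillation signal for legitimate training is largely preserved. The paper chooses the perturbation direction by optimization; the following is only a crude stand-in under the assumption that keeping each example's predicted class intact roughly approximates "maximally preserving training performance". `sigma`, `shrink`, and `max_tries` are illustrative knobs, not values from the paper.

```python
import torch

def perturb_logits(logits, sigma=0.5, shrink=0.5, max_tries=10):
    """Add Gaussian noise to outgoing logits, then shrink the noise on any
    example whose predicted class flipped, so that distillation utility is
    roughly retained. All hyperparameters here are illustrative assumptions."""
    preds = logits.argmax(dim=1)
    noisy = logits + sigma * torch.randn_like(logits)
    for _ in range(max_tries):
        flipped = noisy.argmax(dim=1) != preds
        if not flipped.any():
            break
        # Pull the flipped rows partway back toward the clean logits.
        noisy[flipped] = logits[flipped] + shrink * (noisy[flipped] - logits[flipped])
    return noisy
```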
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5691