Keywords: Federated Learning, BERT-based MetaNet, Personalized Differential Privacy, DP Accountant
Abstract: Federated learning (FL) enables multiple clients to train a shared model without sharing raw data, but gradients can still leak sensitive information through inversion and membership inference attacks. Differential privacy (DP) mitigates this risk by clipping gradients and adding calibrated noise, but most DP-FL methods rely on static noise and clipping schedules. Such rigid designs fail to account for client heterogeneity, changing convergence dynamics, and the growth of cumulative privacy loss. To address these challenges, we propose FedMAP, a closed-loop framework for adaptive differential privacy in FL. FedMAP integrates three components. First, a client-side MetaNet predicts clipping bounds and noise scales $(C_t,\sigma_t)$ from gradient statistics using a lightweight pretrained BERT-tiny backbone, enabling effective adaptation across communication rounds. Second, a server-side Rényi DP accountant tracks heterogeneous privacy costs, computes the global expenditure $\varepsilon_{\mathrm{global}}$, and broadcasts it as a budget signal that constrains cumulative loss and guides client adaptation. Third, a global feedback regularization mechanism combines local penalties on per-round privacy cost with global penalties from $\varepsilon_{\mathrm{global}}$, ensuring alignment between client adaptation and the overall budget. Experiments show that FedMAP improves privacy compliance and offers stronger robustness against attacks than baseline methods.
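The closed loop described in the abstract can be illustrated with a minimal Python sketch. All function names, the `eps_budget` parameter, and the quantile heuristic below are illustrative assumptions: the heuristic stands in for the BERT-tiny MetaNet and the feedback regularization, and the worst-case aggregation of per-client costs is one simple choice, not necessarily the paper's. Only the Rényi-DP formulas for the Gaussian mechanism and its conversion to $(\varepsilon,\delta)$-DP are standard results.

```python
"""Minimal sketch of a FedMAP-style closed loop (illustrative assumptions only)."""
import numpy as np

ALPHAS = np.arange(2, 64)   # Renyi orders tracked by the server-side accountant
DELTA = 1e-5                # target delta for reporting (eps, delta)-DP


def predict_clip_and_noise(grad, eps_global, eps_budget):
    """Stand-in for the client-side MetaNet: map gradient statistics and the
    broadcast budget signal to a per-round clipping bound C_t and noise scale
    sigma_t. A real MetaNet would learn this mapping."""
    c_t = np.quantile(np.abs(grad), 0.9) * np.sqrt(grad.size)  # clipping bound C_t
    spent = min(eps_global / eps_budget, 1.0)                  # fraction of budget consumed
    sigma_t = 0.8 + 1.2 * spent                                # raise noise as budget depletes
    return c_t, sigma_t


def gaussian_rdp(sigma, alphas=ALPHAS):
    """RDP of the Gaussian mechanism with noise multiplier sigma
    (sensitivity normalized to 1 by clipping)."""
    return alphas / (2.0 * sigma ** 2)


def rdp_to_eps(rdp, delta=DELTA, alphas=ALPHAS):
    """Standard RDP -> (eps, delta)-DP conversion."""
    return float(np.min(rdp + np.log(1.0 / delta) / (alphas - 1)))


def client_update(grad, eps_global, eps_budget, rng):
    """Clip the local update to norm C_t and add Gaussian noise of std sigma_t * C_t."""
    c_t, sigma_t = predict_clip_and_noise(grad, eps_global, eps_budget)
    clipped = grad * min(1.0, c_t / (np.linalg.norm(grad) + 1e-12))
    return clipped + rng.normal(0.0, sigma_t * c_t, size=grad.shape), sigma_t


def simulate(rounds=50, n_clients=10, dim=100, eps_budget=8.0, seed=0):
    rng = np.random.default_rng(seed)
    rdp_per_client = np.zeros((n_clients, len(ALPHAS)))  # heterogeneous accountant state
    eps_global = 0.0
    for _ in range(rounds):
        updates = []
        for k in range(n_clients):
            grad = rng.normal(size=dim)  # stand-in for a real local gradient
            noisy, sigma_t = client_update(grad, eps_global, eps_budget, rng)
            updates.append(noisy)
            rdp_per_client[k] += gaussian_rdp(sigma_t)   # compose this client's round cost
        _aggregate = np.mean(updates, axis=0)            # FedAvg-style aggregation (post-processing)
        # Aggregate heterogeneous per-client costs as the worst-case expenditure and
        # broadcast it back to clients as the budget signal for the next round.
        eps_global = max(rdp_to_eps(rdp_per_client[k]) for k in range(n_clients))
    return eps_global


if __name__ == "__main__":
    print(f"epsilon_global after simulation: {simulate():.2f}")
```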
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 10194