Abstract: Recent studies have revealed that Split Federated Learning (SFL) is vulnerable to Model Inversion (MI) attacks, in which an attacker reconstructs clients’ raw data from collected features. Though existing defenses achieve some success, they remain unsatisfactory due to their limited ability to suppress sensitive information while preserving task-conducive information within features. Since this limitation can be attributed to insufficient disentanglement of data-feature and feature-task dependencies, we propose a Dual Dependency Disentangling framework for SFL (D3SFL) to strengthen defense against MI attacks while maintaining utility. Specifically, we first propose a variable-structure data-feature dependency decoupling module, which produces privacy-preserving features by learning input-specific sub-networks, thereby enhancing the disentanglement of data-feature dependencies to hide sensitive information. We then propose a stochastic feature-task dependency separating module that adopts sparse binary masks to preserve target-task-critical features and reduce sensitive information, yielding effective disentanglement of feature-task dependencies for lower privacy leakage and better utility maintenance. Extensive experiments on image-classification datasets (CIFAR-100 and FaceScrub) and a time-series dataset (METR-LA) show that D3SFL outperforms competing defenses, achieving remarkable defense ability against MI attacks (with up to $54\times$, $17\times$, and $18\times$ reconstruction MSE on average, respectively) while maintaining better utility (with only 0.13% and 0.06% accuracy drops over standard SFL on CIFAR-100 and FaceScrub, respectively, and only a 0.03 MAE increase on METR-LA over CNFGNN). Our code is available at https://github.com/Shawn-CT/D3SFL
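To give a concrete sense of the stochastic masking idea mentioned in the abstract, the sketch below applies a random sparse binary mask to client-side ("smashed") features before they would be sent to the server. This is a minimal illustration only, not the authors' implementation: the keep ratio, the uniform sampling scheme, and all variable names here are hypothetical, and D3SFL's actual module learns which entries to preserve for the target task.

```python
import numpy as np

# Illustrative sketch (not the D3SFL implementation): a stochastic sparse
# binary mask zeroes out most feature entries, suppressing information an
# MI attacker could exploit. The keep_ratio value is a hypothetical choice.
rng = np.random.default_rng(0)

def mask_features(features: np.ndarray, keep_ratio: float = 0.3) -> np.ndarray:
    """Keep roughly keep_ratio of feature entries; zero out the rest."""
    mask = rng.random(features.shape) < keep_ratio  # sparse binary mask
    return features * mask

feats = rng.normal(size=(4, 16))           # dummy intermediate features
masked = mask_features(feats, keep_ratio=0.3)
sparsity = 1.0 - np.count_nonzero(masked) / masked.size
print(f"fraction of entries suppressed: {sparsity:.2f}")
```

In the actual framework the mask is not purely random: it is optimized so that task-critical entries survive, which is why utility degrades far less than naive feature perturbation would suggest.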
External IDs: dblp:journals/tifs/ChenWZDWBLL25