Differentially Private Vertical Federated Learning with Dual-Sparsification

Published: 2025 · Last Modified: 05 Nov 2025 · WASA (2) 2025 · CC BY-SA 4.0
Abstract: Vertical Federated Learning (VFL) is an emerging privacy-preserving machine learning scheme that allows multiple clients who share the same samples but hold different features to train a model collaboratively. In VFL, incorporating differential privacy techniques can strengthen the privacy protection of clients. However, existing differential privacy-based VFL methods fail to account for the adverse impact of noise on model performance, thereby reducing their effectiveness. To address this issue, we propose a novel method called Differentially Private Vertical Federated Learning with Dual-Sparsification (DS-VFL). We add Gaussian noise to the transmitted embeddings and gradients independently, ensuring the privacy of data features and labels, respectively. To improve model accuracy, we apply sparsification to the input and output layers of the passive parties' local models. This reduces the sensitivity of the model output, thereby decreasing the standard deviation of the required noise. Additionally, we utilize the Gini impurity to improve the efficiency of initialization. Extensive experiments demonstrate that our method significantly improves the performance of VFL compared to state-of-the-art methods.
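The core idea — sparsifying an embedding before perturbing it with Gaussian noise, so that a lower sensitivity yields a smaller noise scale — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the top-k sparsification rule, and the parameter values below are illustrative assumptions, and the noise scale uses the standard Gaussian-mechanism calibration:

```python
import numpy as np

def top_k_sparsify(x, k):
    # Keep only the k largest-magnitude entries; zero out the rest.
    idx = np.argsort(np.abs(x))[-k:]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

def gaussian_mechanism(x, sensitivity, epsilon, delta, rng):
    # Classic Gaussian-mechanism noise scale: sigma grows with sensitivity,
    # so reducing sensitivity directly reduces the injected noise.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return x + rng.normal(0.0, sigma, size=x.shape)

rng = np.random.default_rng(0)
embedding = rng.normal(size=16)  # stand-in for a passive party's embedding

# Sparsification limits the number of nonzero coordinates; combined with
# per-coordinate clipping this bounds the L2 sensitivity of the output.
sparse = top_k_sparsify(embedding, k=4)
noisy = gaussian_mechanism(sparse, sensitivity=1.0, epsilon=1.0,
                           delta=1e-5, rng=rng)
```

In this sketch the passive party would transmit `noisy` instead of the raw embedding; the same perturbation pattern would apply on the gradient path for label privacy.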