Keywords: Vertical Federated Learning, Differential Privacy, Zeroth Order Optimization
Abstract: Vertical Federated Learning (VFL) enables collaborative training with feature-partitioned data, yet remains vulnerable to label leakage through gradient transmissions. In this work, we propose DPZV, a gradient-free VFL framework that achieves tunable differential privacy (DP) with formal performance guarantees. By leveraging zeroth-order (ZO) optimization, DPZV eliminates explicit gradient exposure while providing provable differential privacy guarantees. Standard DP techniques such as DP-SGD are difficult to apply in zeroth-order VFL due to VFL's distributed nature and the high variance incurred by vector-valued noise. DPZV overcomes these limitations by injecting low-variance scalar noise at the server, enabling controllable privacy with reduced memory overhead. We conduct a comprehensive theoretical analysis showing that DPZV attains a convergence rate comparable to first-order (FO) optimization methods while satisfying formal $(\epsilon, \delta)$-DP guarantees. Experiments on image and language benchmarks demonstrate that DPZV achieves higher accuracy than several baselines across a wide range of privacy constraints ($\epsilon \leq 10$), thereby improving the privacy-utility tradeoff in VFL.
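To make the mechanism described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a two-point zeroth-order update in which Gaussian noise is added only to the scalar loss difference, mirroring the "low-variance scalar noise at the server" idea. All function names, parameters, and defaults below are assumptions for illustration.

```python
# Illustrative sketch: two-point zeroth-order step with scalar Gaussian noise.
# Not the DPZV protocol itself; names and hyperparameters are assumed.
import numpy as np

def dp_zo_step(loss_fn, theta, mu=1e-3, sigma=0.1, lr=0.01, rng=None):
    """One gradient-free update with noise injected into the scalar loss difference.

    loss_fn : callable mapping a parameter vector to a scalar loss
    theta   : current parameter vector (np.ndarray)
    mu      : perturbation radius for the two-point ZO estimate
    sigma   : std. of the Gaussian noise added to the scalar difference
    lr      : step size
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(theta.shape)           # random perturbation direction
    delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
    delta += sigma * rng.standard_normal()         # scalar noise, not a noise vector
    grad_est = (delta / (2.0 * mu)) * u            # zeroth-order gradient estimate
    return theta - lr * grad_est

# Usage example: minimize a simple quadratic with the noisy ZO step.
if __name__ == "__main__":
    f = lambda w: float(np.sum(w ** 2))
    w = np.ones(5)
    for _ in range(200):
        w = dp_zo_step(f, w)
    print("final loss:", f(w))
```

Because the noise enters through a single scalar rather than a parameter-dimensional vector, its variance does not scale with model size, which is the intuition behind the reduced-variance claim in the abstract.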
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 12606