Swap-and-Spoil: Untargeted Byzantine Attacks via Class-Consistent View Swaps in Vertical Federated Learning
Keywords: Byzantine Attacks, Vertical Federated Learning, Federated Learning Security, Untargeted Poisoning
Abstract: Vertical Federated Learning (VFL) is a privacy-preserving multi-party training paradigm in which features for the same sample space are vertically partitioned across participants. Security attacks against VFL have gained attention recently, but most work focuses on data poisoning attacks such as backdoor attacks. A Byzantine attack on a federated learning system can target the main model and degrade its accuracy with only a single adversary participating in training. While such untargeted Byzantine attacks have been explored in horizontal settings, they remain underexplored in vertical federated systems. In this paper, we demonstrate how an adversary can mount a successful untargeted Byzantine attack that drives down the global model’s inference-time accuracy. To realize this, we perform consistent cluster-based swapping in the feature space, creating a persistent, poisoned cross-view association during training. The model internalizes this adversary-induced association and, when evaluated on clean, correctly aligned data, fails dramatically. We also show that widely practiced VFL defenses fail to detect the attack without degrading model performance. Our findings establish untargeted Byzantine attacks as a real, underexplored threat to VFL and motivate the design of robust, VFL-specific defenses.
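A minimal sketch of the cluster-consistent swap idea described in the abstract. This is an illustrative assumption of how the attack could be implemented (the paper's actual method may differ): the adversary groups its local feature rows by cluster and applies a fixed, seed-determined permutation within each cluster, so each sample's local view is persistently paired with another same-cluster sample across every training epoch. The function name `swap_and_spoil` and the interface are hypothetical.

```python
import numpy as np

def swap_and_spoil(local_feats, cluster_ids, seed=0):
    """Hypothetical sketch: within each cluster, consistently permute the
    adversary's local feature rows. The permutation is fixed by the seed,
    so the poisoned cross-view association persists across epochs."""
    rng = np.random.default_rng(seed)
    poisoned = local_feats.copy()
    for c in np.unique(cluster_ids):
        idx = np.where(cluster_ids == c)[0]
        perm = rng.permutation(idx)   # fixed per seed -> persistent swap
        poisoned[idx] = local_feats[perm]
    return poisoned

# Toy example: 6 samples with 2 features each, pre-assigned to 2 clusters.
X = np.arange(12, dtype=float).reshape(6, 2)
cids = np.array([0, 0, 0, 1, 1, 1])
Xp = swap_and_spoil(X, cids)
```

Because the permutation is class-consistent (only rows within the same cluster are exchanged), the marginal feature distribution the server observes per cluster is unchanged, which is what makes the swap hard for distribution-based defenses to flag.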
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 24077