Construct a Secure CNN Against Gradient Inversion Attack

Published: 01 Jan 2024 · Last Modified: 26 Jan 2025 · PAKDD (3) 2024 · CC BY-SA 4.0
Abstract: Federated learning enables collaborative model training across multiple clients without sharing raw data, in keeping with privacy regulations: clients send model updates (gradients) to a central server, which aggregates them to improve a global model. Despite these benefits, federated learning is vulnerable to gradient inversion attacks, which can reconstruct private training data from the shared gradients. Traditional defenses, including cryptography, differential privacy, and perturbation techniques, offer protection but often reduce computational efficiency and model performance. In this paper, we introduce the Secure Convolutional Neural Network (SecCNN), a novel approach that embeds an upsampling layer into CNNs to provide an inherent defense against gradient inversion attacks. SecCNN leverages Rank Analysis for enhanced security without sacrificing model accuracy or incurring significant computational cost. Our results demonstrate SecCNN's effectiveness in securing federated learning against privacy breaches, thereby building trust among participants and advancing secure collaborative learning.
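To make the core idea concrete, below is a minimal PyTorch sketch of the kind of architecture the abstract describes: a small CNN with an upsampling layer embedded among its convolutional blocks. This is not the paper's exact configuration; the layer sizes, scale factor, and the class name SecCNNSketch are illustrative assumptions.

```python
# Minimal sketch (assumed layer sizes, not the authors' exact architecture):
# a small CNN with an upsampling layer embedded after the first convolution.
# The intuition suggested by the abstract is that upsampling changes the rank
# structure of the layer's gradient equations, so the shared gradients
# under-determine the private input and gradient inversion becomes ill-posed.
import torch
import torch.nn as nn


class SecCNNSketch(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Embedded upsampling layer: enlarges the feature map so later
            # layers operate on more activations than there are input pixels,
            # which (per the paper's rank-analysis argument) is what hampers
            # reconstruction of the input from gradients.
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    model = SecCNNSketch()
    logits = model(torch.randn(4, 3, 32, 32))  # e.g., a CIFAR-10-style batch
    print(logits.shape)  # torch.Size([4, 10])
```

Because the upsampling layer is parameter-free and cheap relative to the convolutions, this kind of modification is consistent with the abstract's claim of little added computational cost.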