Keywords: Reconstruction Attacks, Medical Imaging, Privacy Attacks, Federated Learning
TL;DR: Client-side security in federated learning for medical imaging isn't enough; a malicious server can mount 2D and 3D reconstruction attacks to access private data, underscoring the need for stronger, system-wide protections to ensure privacy.
Abstract: Federated learning (FL) plays a vital role in boosting both accuracy and privacy in collaborative medical imaging. The importance of privacy grows with the diverse security standards across nations and corporations, particularly in healthcare and global FL initiatives. Current research on privacy attacks in federated medical imaging focuses on sophisticated gradient inversion attacks that can reconstruct images from FL communications. These methods demonstrate potential worst-case scenarios, highlighting the need for effective security measures and the adoption of comprehensive zero-trust security frameworks. Our paper introduces a novel method for performing precise reconstruction attacks on the private data of participating clients in FL settings using a malicious server. We conducted experiments on brain tumor MRI and chest CT datasets, implementing existing 2D and novel 3D reconstruction techniques. Our results reveal significant privacy breaches: 35.19% of data reconstructed with 6 clients and 37.21% with 12 clients in 2D, and 62.92% with 12 clients in 3D. This underscores the urgent need for enhanced privacy protections in FL systems. To address these issues, we suggest effective countermeasures that secure gradient, analytic, and linear layers. Our contributions aim to strengthen the security framework of FL in medical imaging, promoting the safe advancement of collaborative healthcare research. The source code is available at: https://github.com/MIC-DKFZ/2D3D-Privacy-Attacks-In-federated-Medical-Imaging
Submission Number: 4