Client Security Alone Fails in Federated Learning: 2D and 3D Attack Insights

Published: 16 Jul 2024, Last Modified: 16 Jul 2024 · MICCAI Student Board EMERGE Workshop 2024 Oral · CC BY 4.0
Keywords: Reconstruction attacks, Medical Imaging, privacy attacks, federated learning
TL;DR: In medical FL, client-side security alone is insufficient; a malicious server can reconstruct participating clients' private training data.
Abstract: Federated learning (FL) plays a vital role in boosting both accuracy and privacy in the collaborative medical imaging field. The importance of privacy increases with the diverse security standards across nations and corporations, particularly in healthcare and global FL initiatives. Current research on privacy attacks in federated medical imaging focuses on sophisticated gradient inversion attacks that can reconstruct images from FL communications. These methods demonstrate potential worst-case scenarios, highlighting the need for effective security measures and the adoption of comprehensive zero-trust security frameworks. Our paper introduces a novel method for performing precise reconstruction attacks on the private data of participating clients in FL settings using a malicious server. We conducted experiments on brain tumor MRI and chest CT datasets, implementing existing 2D and novel 3D reconstruction techniques. Our results reveal significant privacy breaches: 35.19% of data reconstructed with 6 clients, 37.21% with 12 clients in 2D, and 62.92% in 3D with 12 clients. This underscores the urgent need for enhanced privacy protections in FL systems. To address these issues, we suggest effective measures to counteract such vulnerabilities by securing gradient, analytic, and linear layers. Our contributions aim to strengthen the security framework of FL in medical imaging, promoting the safe advancement of collaborative healthcare research. The source code is available at: https://www.github.com/anonymous.
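The abstract's point about securing linear layers can be illustrated with a well-known analytic leakage (this is a generic textbook sketch, not the paper's attack): for a fully connected layer y = Wx + b, the shared gradients satisfy dL/dW = (dL/dy)xᵀ and dL/db = dL/dy, so any row with a nonzero bias gradient reveals the input x exactly, with no iterative optimization needed.

```python
import numpy as np

# Generic illustration of analytic input recovery from a linear layer's
# gradients (not the paper's method). All names below are hypothetical.
rng = np.random.default_rng(0)
x = rng.normal(size=4)             # private client input (e.g. flattened pixels)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

y = W @ x + b
dL_dy = 2 * y                      # gradient of an example loss L = ||y||^2

# What an honest client would send to the (possibly malicious) server:
grad_W = np.outer(dL_dy, x)        # dL/dW = (dL/dy) x^T
grad_b = dL_dy                     # dL/db = dL/dy

# Server-side reconstruction: divide any row of grad_W by its bias gradient.
i = int(np.argmax(np.abs(grad_b)))         # pick a row with nonzero grad_b
x_reconstructed = grad_W[i] / grad_b[i]    # exact recovery of the input

print(np.allclose(x, x_reconstructed))     # True
```

This is why the suggested countermeasures target gradient, analytic, and linear layers: even without gradient inversion optimization, an unprotected first linear layer can leak inputs in closed form.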
Submission Number: 4