Minimal data poisoning attack in federated learning for medical image classification: An attacker perspective

Published: 01 Jan 2025 · Last Modified: 02 Mar 2025 · Artificial Intelligence in Medicine, 2025 · License: CC BY-SA 4.0
Abstract

Highlights:
- Federated learning (FL) protects data privacy but is vulnerable to data poisoning.
- Existing FL attacks overlook the attack budget and visibility, both crucial in real-world scenarios.
- Our proposed FL-AGMA, an attention-guided minimal attack, addresses these limitations.
- FL-AGMA optimizes budget and visibility, reducing model accuracy by 45% while using only 32% of the attack budget.
- Evaluations on Covid-chestxray, Camelyon17, and HAM10000 validate FL-AGMA's efficacy.
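To make the attack setting concrete, the sketch below shows generic budget-constrained label flipping, the basic data-poisoning primitive a malicious FL client could apply to its local shard before training. This is an illustrative assumption, not the paper's method: FL-AGMA's attention-guided sample selection is not reproduced here, and the 0.32 budget merely echoes the 32% figure from the highlights. The function name `poison_labels` and the toy dataset are hypothetical.

```python
import random

def poison_labels(dataset, budget, num_classes, rng):
    """Flip the labels of at most `budget` fraction of samples.

    Generic label-flipping poisoning under a sample budget; the
    attention-guided selection of FL-AGMA is NOT implemented here.
    """
    data = list(dataset)  # copy so the clean shard is untouched
    k = int(budget * len(data))  # number of samples the budget allows
    for i in rng.sample(range(len(data)), k):  # random (not guided) pick
        x, y = data[i]
        data[i] = (x, (y + 1) % num_classes)  # shift label to another class
    return data

# Toy local shard: 100 (image-id, label) pairs over 3 classes.
rng = random.Random(0)
clean = [(f"img{i}", i % 3) for i in range(100)]
poisoned = poison_labels(clean, budget=0.32, num_classes=3, rng=rng)
changed = sum(1 for a, b in zip(clean, poisoned) if a[1] != b[1])
```

With a 0.32 budget on 100 samples, exactly 32 labels are flipped while the inputs themselves are left untouched, which is what keeps such attacks hard to spot (low visibility) at inspection time.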