Abstract: Local differential privacy (LDP) provides strong user privacy protection but is vulnerable to poisoning attacks launched by malicious users, which contaminate the aggregated estimates. Although various works explore attacks with different manipulation targets, a practical and reasonably general defense has remained elusive. In this paper, we address this problem in the basic scenario of histogram estimation. We model adversaries as Byzantine users who may collude to maximize their attack goals. Analyzing the impact of poisoning attacks on data utility from the perspective of attacker capability, we identify a significant threat, the maximal loss attack (MLA). Observing that an attack inflicting high utility damage necessarily breaks the smoothness of the histogram, we propose a defense, LDP-Purifier, that sanitizes poisoned histograms. Extensive experiments validate the effectiveness of LDP-Purifier, showing that it substantially suppresses the estimation errors caused by various attacks.