Interactive Verifiable Local Differential Privacy Protocols for Mean Estimation

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · TrustCom 2024 · CC BY-SA 4.0
Abstract: Local differential privacy (LDP) has emerged as a widely used privacy protection technique in the local setting. However, because perturbation is performed locally by each user, LDP is susceptible to poisoning attacks by malicious users, which can significantly degrade the accuracy of estimation results. Existing work has demonstrated that interactive verifiable LDP protocols can effectively verify the perturbation process and prevent output poisoning attacks, but it focuses primarily on LDP protocols for frequency estimation. In this paper, we focus on defending against output poisoning attacks in mean estimation for numeric attributes under LDP. We study a classic LDP mean estimation protocol, the Piecewise Mechanism (PM). Because user data in PM consists of numeric attributes and its perturbation function follows a continuous probability density, constructing a commitment vector is challenging. To address this, we propose a new method for constructing commitment vectors: we discretize the attribute value domain and adjust the perturbation probabilities according to a minimum-movement principle, so as to minimize the impact on the protocol. We further enhance the verifiable PM protocol with zero-knowledge proofs to strengthen its defense against output poisoning attacks in LDP. Experiments on real datasets evaluate the proposed interactive verifiable LDP protocols and demonstrate the improved defense performance of our methods.
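For reference, the following Python sketch illustrates the standard Piecewise Mechanism perturbation that the abstract builds on, together with an illustrative one-hot discretization of its continuous output range into bins (the kind of vector a commitment could be built over). The bin count `k` and the simple equal-width binning are assumptions for illustration only; the sketch does not reproduce the paper's minimum-movement probability adjustment or the zero-knowledge-proof components.

```python
import math
import random

def pm_perturb(t: float, epsilon: float) -> float:
    """Piecewise Mechanism (Wang et al., ICDE 2019): perturb t in [-1, 1]
    into a value in [-C, C], where C = (e^{eps/2} + 1) / (e^{eps/2} - 1)."""
    assert -1.0 <= t <= 1.0
    e_half = math.exp(epsilon / 2)
    C = (e_half + 1) / (e_half - 1)
    # High-probability interval [l, r] centred around the true value.
    l = (C + 1) / 2 * t - (C - 1) / 2
    r = l + C - 1
    if random.random() < e_half / (e_half + 1):
        return random.uniform(l, r)
    # Otherwise sample uniformly from the low-probability region [-C, l) U (r, C].
    left_len = l - (-C)
    right_len = C - r
    u = random.uniform(0, left_len + right_len)
    return -C + u if u < left_len else r + (u - left_len)

def one_hot_bin(t_star: float, epsilon: float, k: int = 32) -> list[int]:
    """Illustrative one-hot vector over k equal-width bins of [-C, C].
    This is only a placeholder for the paper's commitment-vector construction;
    the minimum-movement probability adjustment is not reproduced here."""
    e_half = math.exp(epsilon / 2)
    C = (e_half + 1) / (e_half - 1)
    idx = min(int((t_star + C) / (2 * C) * k), k - 1)
    return [1 if i == idx else 0 for i in range(k)]

if __name__ == "__main__":
    eps = 1.0
    t_star = pm_perturb(0.3, eps)
    print(t_star, one_hot_bin(t_star, eps))
```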