Joint Top-K Sparsification and Shuffle Model for Communication-Privacy-Accuracy Tradeoffs in Federated-Learning-Based IoV

Published: 01 Jan 2024, Last Modified: 28 Sept 2024. IEEE Internet Things J. 2024. License: CC BY-SA 4.0
Abstract: The Internet of Vehicles (IoV) connects a massive number of smart vehicles for inter- and intra-vehicle information sharing. Data privacy issues, such as privacy leakage and privacy cost, are the key challenges that hinder vehicle operators from sharing their data safely. Traditional privacy-preserving techniques, including federated learning (FL) and differential privacy (DP), can protect data privacy and security, but their high privacy cost severely limits learning performance. In addition, IoV services demand low communication latency, which can be achieved by reducing the number of communicated bits, but this also limits learning performance. Thus, balancing the communication-privacy-accuracy tradeoff to achieve low latency, strong privacy preservation, and high model performance remains a challenging issue in IoV. In this article, a privacy-enhanced differentially private FL framework (FedSDP) is proposed based on the shuffle model to ensure secure and efficient data sharing under the constraint of low latency in IoV. In the proposed framework, four privacy enhancement methods are introduced, namely data subsampling, vehicle sampling, the shuffle model, and dummy points, to amplify privacy and obtain higher learning performance. Then, a Top-K sparsification mechanism is applied to the vehicle training process to reduce the number of communicated bits. Finally, the experimental results indicate that our approach can reduce the communication latency by 31.66%, strengthen the privacy guarantee $\epsilon _{c}$ by 30.77%, and improve the test accuracy by 48.56%, compared with the traditional SDP mechanism.
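To make the client-side pipeline described in the abstract concrete, the following is a minimal sketch of Top-K sparsification combined with a clip-and-noise local DP step applied to a vehicle's model update before it is handed to the shuffler. It is an illustrative reconstruction, not the paper's implementation: the function names, the parameters k, clip_norm, and sigma, and the use of Gaussian noise are assumptions chosen for clarity.

```python
# Hypothetical sketch (not the paper's code): Top-K sparsification plus a
# clip-and-noise local DP step, as a single vehicle might apply before the
# shuffler anonymizes and forwards updates to the aggregation server.
import numpy as np


def top_k_sparsify(update: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of the update; zero the rest."""
    flat = update.ravel()
    if k >= flat.size:
        return update.copy()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)


def privatize(update: np.ndarray, clip_norm: float, sigma: float,
              rng: np.random.Generator) -> np.ndarray:
    """Clip the update to clip_norm and add Gaussian noise (local DP step)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)


# Example: a vehicle sparsifies and privatizes its local update; in the shuffle
# model the resulting message would then be sent anonymously via the shuffler.
rng = np.random.default_rng(0)
local_update = rng.standard_normal(1000)
message = privatize(top_k_sparsify(local_update, k=100),
                    clip_norm=1.0, sigma=0.5, rng=rng)
```

Sending only the K retained coordinates (plus their indices) is what reduces the communicated bits, while clipping, noising, and shuffling together provide the amplified privacy guarantee the abstract refers to.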