Edge Caching with Federated Unlearning for Low-Latency V2X Communications

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · IEEE Commun. Mag. 2024 · CC BY-SA 4.0
Abstract: Vehicle-to-everything (V2X) communication has gained popularity as a cutting-edge technology in the Internet of Vehicles (IoV), ensuring low-latency communication for emerging transportation features. Federated learning (FL), a widely used distributed collaborative AI approach, is transforming edge caching in V2X communications thanks to its strong privacy protection. However, current FL-based edge caching methods can degrade communication performance when non-independent and identically distributed (non-IID) data or invalid data, such as poisoned data, are introduced during training. In this article, we present FedFilter, an FL-based edge caching solution designed to address these challenges. FedFilter employs a personalized FL method based on model decomposition and hierarchical aggregation, caching content tailored to the diverse preferences of individual users. This improves the cache hit rate, reducing backhaul load and service latency. Moreover, FedFilter detects and mitigates the adverse effects of invalid data on the global model, preserving the Quality of Service (QoS) of V2X communications. A case study demonstrates the effectiveness of FedFilter, showing that it not only reduces latency but also effectively filters out invalid data while maintaining a high cache hit rate.
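The abstract does not specify how FedFilter detects invalid client updates, but a common way to mitigate poisoned data in FL aggregation is to discard updates that deviate anomalously from a robust reference (e.g., the coordinate-wise median) before averaging. The following minimal sketch illustrates that general idea; the function name `robust_aggregate`, the `tol` threshold, and the single-level aggregation (the paper uses hierarchical aggregation) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def robust_aggregate(global_model, client_models, tol=3.0):
    """Illustrative robust FedAvg-style round: filter outlier updates, then average.

    global_model:  1-D array of current global parameters
    client_models: list of 1-D arrays, one locally trained model per client
    tol:           hypothetical threshold on distance to the median update
    """
    # Each client's update is its delta from the current global model.
    updates = np.stack([m - global_model for m in client_models])
    # Coordinate-wise median serves as a robust reference update.
    median_update = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median_update, axis=1)
    # Flag updates whose distance to the median is anomalously large
    # (e.g., poisoned clients); epsilon guards against a zero median distance.
    cutoff = tol * (np.median(dists) + 1e-12)
    keep = dists <= cutoff
    # Average the surviving updates and apply them to the global model.
    new_global = global_model + updates[keep].mean(axis=0)
    return new_global, keep

# Usage: three benign clients near 1.0 and one poisoned client at 100.0.
g = np.zeros(4)
clients = [np.full(4, v) for v in (1.0, 1.1, 0.9, 100.0)]
new_g, keep = robust_aggregate(g, clients)
# The poisoned client is excluded; the aggregate stays near the benign consensus.
```

In an edge-caching deployment this filtering would run at each aggregation point (roadside unit or base station), so a poisoned preference model from one vehicle cannot skew the cached-content ranking for everyone else.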