Hierarchical Federated Learning in MEC Networks with Knowledge Distillation

Published: 09 Oct 2024, Last Modified: 07 Mar 2025 · 2024 International Joint Conference on Neural Networks · CC BY 4.0
Abstract: Modern automobiles are equipped with advanced computing capabilities, allowing them to become powerful computing units capable of processing large amounts of data and training machine learning models. However, machine learning algorithms typically require a large centralized dataset, raising concerns about users' privacy. Federated Learning (FL) is a distributed machine learning paradigm that tackles this problem by allowing intelligent vehicles to collaboratively train machine learning models locally without compromising their private data. Multiple works have applied Federated Learning to Mobile Edge Computing (MEC) networks with a three-tier architecture consisting of mobile clients, edge servers, and cloud servers, where each edge server aggregates its local set of clients and the cloud server aggregates the edge servers to learn a global model. This approach reduces the expensive communication costs to the distant cloud server. However, the three-tier paradigm faces several challenges, a notable one being clients' constant mobility: each regional edge sees a fluctuating set of participating clients at every round, a phenomenon we refer to as distribution drift. This drift destabilizes the local training process, leading to suboptimal accuracy and convergence. As a solution, we propose a local training process based on the knowledge distillation mechanism. Specifically, we employ the global model and an ensemble of historical regional models from the edge servers as sources of knowledge to guide local training, preventing local models from drifting away from the global knowledge and preserving information from clients that have left the region. Experimental results show that the proposed method achieves better performance than the baselines.
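To make the distillation-guided local objective concrete, the following is a minimal PyTorch sketch of one plausible formulation. The abstract does not specify the exact loss, so the function name `distillation_local_loss`, the temperature, the weighting coefficient `alpha`, and the choice to average the historical regional teachers' softened outputs are all illustrative assumptions, not the paper's verified method.

```python
import torch
import torch.nn.functional as F

def distillation_local_loss(student_logits, labels, global_logits,
                            regional_logits_list, temperature=2.0,
                            alpha=0.5):
    """Hypothetical local training objective: cross-entropy on the
    client's private labels, plus KL-divergence terms that pull the
    local (student) model toward the global model and toward an
    ensemble of historical regional edge models."""
    # Supervised term on the client's private data.
    ce = F.cross_entropy(student_logits, labels)

    # Temperature-softened student distribution (log-probabilities).
    soft_student = F.log_softmax(student_logits / temperature, dim=1)

    # Soft targets from the global (cloud) model, discouraging the
    # local model from drifting away from global knowledge.
    kd_global = F.kl_div(
        soft_student,
        F.softmax(global_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Soft targets from the averaged ensemble of historical regional
    # models, preserving information from clients that left the region.
    # Averaging the teachers' probabilities is one assumed ensembling choice.
    ensemble_probs = torch.stack(
        [F.softmax(l / temperature, dim=1) for l in regional_logits_list]
    ).mean(dim=0)
    kd_regional = F.kl_div(
        soft_student, ensemble_probs, reduction="batchmean"
    ) * temperature ** 2

    return ce + alpha * (kd_global + kd_regional)
```

In this reading, each client would evaluate the frozen global model and the stored regional models on its local batch to obtain teacher logits, then back-propagate only through the student. The `temperature ** 2` factor is the standard rescaling that keeps the distillation gradients comparable in magnitude to the cross-entropy term.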