Abstract: Decentralized federated learning (DFL) enables clients to train a neural network model in a device-to-device (D2D) manner without central coordination. In practical systems, DFL faces challenges from dynamic topology changes, time-varying channel conditions, and the limited computational capability of devices, all of which can degrade its performance. To address these challenges, in this paper we propose a graph neural network (GNN)-based approach to minimize the total training delay and improve the learning performance of DFL in D2D wireless networks. In our proposed approach, a multi-head graph attention mechanism captures the different features of clients and channels. We design a neighbor selection module which enables each client to select a subset of its neighbors to participate in model aggregation, and a decoder which enables each client to determine its transmit power and CPU frequency. Experimental results show that our proposed algorithm achieves a lower total training delay than three baseline schemes, while attaining testing accuracy similar to that of the full participation scheme.
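To make the multi-head graph attention mechanism mentioned above concrete, the sketch below implements a standard GAT-style attention layer in numpy, where each client (node) aggregates its neighbors' features with learned attention weights and the per-head outputs are concatenated. This is a minimal illustration of the generic mechanism, not the paper's architecture; all shapes, parameter names, and the use of dense adjacency matrices are assumptions for clarity.

```python
import numpy as np

def multi_head_gat_layer(h, adj, weights, attn, leaky_slope=0.2):
    """One multi-head graph attention layer (generic GAT-style sketch).

    h:       (N, F) node features, one row per client.
    adj:     (N, N) binary adjacency; adj[i, j] = 1 if j is a neighbor of i.
    weights: list of (F, Fp) per-head projection matrices.
    attn:    list of (2 * Fp,) per-head attention vectors.
    Returns a (N, num_heads * Fp) array of concatenated head outputs.
    """
    outputs = []
    for W, a in zip(weights, attn):
        z = h @ W  # (N, Fp) projected features for this head
        N = z.shape[0]
        # Pairwise attention logits e[i, j] = LeakyReLU(a^T [z_i || z_j]).
        e = np.zeros((N, N))
        for i in range(N):
            for j in range(N):
                s = np.concatenate([z[i], z[j]]) @ a
                e[i, j] = s if s > 0 else leaky_slope * s
        # Mask out non-neighbors, then softmax over each node's neighborhood.
        e = np.where(adj > 0, e, -1e9)
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha = alpha / alpha.sum(axis=1, keepdims=True)
        outputs.append(alpha @ z)  # attention-weighted neighbor aggregation
    return np.concatenate(outputs, axis=1)
```

Because each head uses its own projection and attention vector, different heads can specialize in different node or edge features (e.g. computational capability versus channel quality), which is the motivation for the multi-head design.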