Cost-Effective Federated Learning: A Unified Approach to Device and Training Scheduling

Published: 01 Jan 2024 · Last Modified: 27 Sept 2024 · ICC 2024 · CC BY-SA 4.0
Abstract: Federated learning enables decentralized model training across numerous devices without centralizing data, exchanging only model updates to enhance privacy and reduce communication overhead. Despite these advantages, federated learning systems must be optimized for cost efficiency, given the limited computational capabilities and battery life of edge devices. Existing research typically minimizes either time or energy costs, but rarely both, and does not jointly optimize device and training scheduling parameters under system and data heterogeneity. In this paper, we formulate a novel joint optimization problem over device and training scheduling that minimizes the total cost of federated learning while guaranteeing model convergence. We propose a new device scheduling scheme, Group Scheduling on Orthogonal Frequency-Division Multiple Access (GS-OFDMA), to improve time efficiency, and develop an iterative algorithm to solve the resulting mixed-integer nonlinear programming (MINLP) problem. Experimental results show that our approach reduces the total cost by at least 35% across different real-world datasets and data distributions compared with random participant selection.