Keywords: Federated Learning, Resource Management, Client Selection
Abstract: Federated learning (FL) decentralizes machine learning: local devices train models on their own data and share only model updates with a central server, preserving privacy and conserving bandwidth. Despite this potential, FL faces challenges in client selection: the non-independent and identically distributed (non-IID) nature of client data can degrade performance, and resource constraints and training costs make it impractical to engage all clients simultaneously. To address these issues,
we propose Largest Distance Client Selection (LDCS), a novel method that prioritizes clients whose local models diverge most from the global model, as quantified by the Frobenius norm. By concentrating participation on the clients with the greatest potential to improve the global model, LDCS improves training efficiency and model performance while overcoming the limitations of existing random and loss-based selection approaches.
Experiments against four existing client selection methods show that our method improves model performance by up to 5% and accelerates convergence by up to 8.5%.
Submission Number: 12
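Below is a minimal, illustrative Python sketch of the LDCS selection rule described in the abstract. It assumes each client's local weights and the global weights are available as lists of NumPy arrays; the function names (frobenius_distance, select_clients_ldcs), the per-layer norm summation, and the fixed top-k selection rule are assumptions made for illustration, not the authors' reference implementation.

```python
# Illustrative sketch of Largest Distance Client Selection (LDCS):
# rank clients by the Frobenius-norm divergence of their local model
# from the global model, then pick the k most divergent clients.
# All names and the fixed top-k rule are assumptions for illustration.
import numpy as np


def frobenius_distance(local_weights, global_weights):
    """Sum per-layer Frobenius norms of (local - global) differences.

    1-D tensors (e.g. bias vectors) fall back to the Euclidean norm,
    which coincides with the Frobenius norm for vectors.
    """
    return sum(
        np.linalg.norm(lw - gw, ord="fro") if lw.ndim == 2
        else np.linalg.norm(lw - gw)
        for lw, gw in zip(local_weights, global_weights)
    )


def select_clients_ldcs(client_weights, global_weights, k):
    """Return the ids of the k clients farthest from the global model."""
    distances = {
        cid: frobenius_distance(weights, global_weights)
        for cid, weights in client_weights.items()
    }
    # Largest distance first: these local models have the greatest
    # potential to move (and thus improve) the global model.
    return sorted(distances, key=distances.get, reverse=True)[:k]


# Toy usage: 5 clients, a 2-layer model, select the 3 most divergent.
rng = np.random.default_rng(0)
global_w = [rng.normal(size=(4, 4)), rng.normal(size=4)]
clients = {
    cid: [w + rng.normal(scale=0.1 * (cid + 1), size=w.shape) for w in global_w]
    for cid in range(5)
}
print(select_clients_ldcs(clients, global_w, k=3))  # e.g. [4, 3, 2]
```

In a real FL round these distances would typically be computed on the server after receiving client updates; the abstract does not specify details such as the value of k or how ties are broken, so those remain design choices here.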