Model Selection Based on DRL: Improving Personal Model Performance in Federated Learning

Published: 01 Jan 2024, Last Modified: 15 Jul 2025 · WCNC 2024 · CC BY-SA 4.0
Abstract: Federated learning (FL) is popular because it achieves distributed model training while allowing data to stay local. It trains a global model by aggregating a selected set of local models trained on participants' local data. However, the global model may not perform well for all participants, especially when participants' data distributions are non-IID. Participants actually care more about the Personal Model Performance (PMP), i.e., the model's performance on their own data distribution, than about its performance on all data. In this paper, we design a model selection method that assigns a personalized set of models to each participant to maximize PMP. We first propose a model selection metric: model similarity. We prove theoretically that selecting models similar to a participant's own local model brings the aggregated model closer to the ideal one. We then design a DRL-based model selection method to maximize PMP for each participant. Through careful design and dimension reduction of actions and states, our TD3-based model selection method achieves the highest PMP among all baselines. Moreover, it transfers across datasets: a model selection agent trained on one dataset, e.g., MNIST, also works well on a similar dataset, e.g., FMNIST.
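The similarity-based selection idea in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: it uses cosine similarity over flattened model weights as the similarity metric and plain averaging for aggregation, whereas the paper selects models with a learned TD3 policy.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened weight vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_and_aggregate(own, candidates, k=2):
    """Toy stand-in for the paper's selection step (which uses a DRL agent):
    pick the k candidate models most similar to `own`, then average them
    together with `own` to form the personalized aggregate."""
    sims = [cosine_similarity(own, c) for c in candidates]
    top = np.argsort(sims)[-k:]                      # indices of k most similar
    selected = [candidates[i] for i in top] + [own]
    return np.mean(selected, axis=0)

# Toy example: each vector stands for one participant's flattened weights.
own = np.array([1.0, 0.0, 1.0])
candidates = [
    np.array([0.9, 0.1, 1.1]),    # similar to `own`
    np.array([-1.0, 0.0, -1.0]),  # dissimilar (points the opposite way)
    np.array([1.0, 0.2, 0.8]),    # similar to `own`
]
agg = select_and_aggregate(own, candidates, k=2)
# The dissimilar model is excluded, so `agg` averages only the two
# similar candidates with `own`.
```

The intuition matches the abstract's theoretical claim: excluding dissimilar models keeps the aggregate close to the participant's own data distribution, which is what PMP measures.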