Abstract: Most Personalized Federated Learning (PFL) algorithms optimize each client's personalized model (PM) by merging its model parameters with other (similar or generic) model parameters. However, the merged parameters in these algorithms may fit data of low relevance to the client, limiting the performance of the PM. In this paper, instead of merging model parameters, we generate similar data for each client through the collaboration of a generic model (GM) on the server. To train a generator on the server that can produce data for all classes without access to real data, we employ the GM as the discriminator in adversarial training against the generator. Additionally, we introduce a similarity assessment metric that measures the similarity between local data and data from other classes. However, non-IID data among clients can weaken the GM, which in turn degrades both generator training and similarity assessment. To address this issue, we design a directive mechanism that allows the GM to be optimized during adversarial training, without requiring additional training. Experimental results validate the superiority of our algorithm over state-of-the-art algorithms in terms of accuracy, loss, and convergence speed.
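To make the data-free generator training concrete, below is a minimal PyTorch sketch of the server-side step in which a frozen GM plays the discriminator role and a conditional generator learns to produce class-targeted synthetic samples. The `Generator` architecture, `train_generator` helper, and all hyperparameters are hypothetical illustrations, not the paper's actual implementation; in particular, the paper's directive mechanism for optimizing the GM during this process is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical conditional generator: maps (noise, class label) -> synthetic sample.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        # Condition the noise on the target class via an embedding.
        return self.net(z * self.embed(y))

def train_generator(generator, gm, num_classes=10, steps=200, batch=64, device="cpu"):
    """Data-free training sketch: the generic model (GM) scores synthetic
    samples, and the generator learns to produce samples that the GM
    assigns to the requested class with high confidence."""
    gm.eval()  # GM plays the discriminator role; its weights stay fixed in this sketch
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    for _ in range(steps):
        z = torch.randn(batch, 100, device=device)
        y = torch.randint(0, num_classes, (batch,), device=device)
        logits = gm(generator(z, y))
        # Generator objective: make the GM classify each sample as its target class.
        loss = F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```

Once trained, such a generator could synthesize class-conditioned samples that approximate data similar to a client's local distribution, which is the role the generated data plays in the personalization step described above.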