Abstract: During federated training, the domain of the target test data on the server can differ greatly from the training data of each client, degrading the performance of the federated model. Moreover, because federated training protects privacy, clients cannot observe the target-domain test data, so the distribution of the target data cannot be exploited. This poses a new challenge for federated learning. Domain generalization techniques are commonly used in centralized settings to address such problems. In recent years, domain generalization methods based on feature decorrelation have enabled models to learn knowledge that generalizes better to unseen target domains. However, existing methods require centralizing the data during feature decorrelation, which conflicts with the data-privacy requirements of federated learning. To address these issues, we propose Reinforcement Federated Domain Generalization (RFDG), which incorporates domain generalization into federated learning via reinforcement learning. RFDG improves the generalization of the federated model on unseen target-domain test data. We design a reinforcement federated feature decorrelation policy that uses reinforcement learning to transform sample reweighting into a parameterized reweighting policy that can be shared among federated learning clients. We also develop a reinforcement federated experience replay technique that compensates for the loss of local feature information caused by the mini-batch mechanism during policy learning. When the policy is shared by every client, features can be decorrelated from a global perspective, allowing the model to focus on capturing the fundamental association between features and labels and thus learn domain-invariant knowledge.
We verify the effectiveness of our method through extensive experiments on four publicly available datasets.
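The sample-reweighting idea behind feature decorrelation can be illustrated with a small sketch: learn per-sample weights that shrink the off-diagonal entries of the weighted feature covariance, so that correlated feature dimensions are decoupled before the classifier fits them. The snippet below is a minimal single-client NumPy illustration, not the RFDG policy itself; the toy data, softmax parameterization, learning rate, and plain gradient descent are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of features: a shared latent factor z makes all
# feature dimensions spuriously correlated with one another.
n, d = 128, 4
z = rng.normal(size=(n, 1))
X = rng.normal(size=(n, d)) + z

def loss_and_grad(theta, X):
    """Decorrelation loss (sum of squared off-diagonal entries of the
    weighted feature covariance) and its gradient w.r.t. the per-sample
    logits theta, where the weights are w = softmax(theta)."""
    w = np.exp(theta - theta.max())
    w /= w.sum()
    m = w @ X                                # weighted feature mean, shape (d,)
    C = (X * w[:, None]).T @ X - np.outer(m, m)  # weighted covariance
    O = C - np.diag(np.diag(C))              # off-diagonal part only
    Xc = X - m                               # samples centred by weighted mean
    # dL/dw_k = 2 * (Xc_k @ O @ Xc_k.T - m @ O @ m)
    g_w = 2.0 * (np.einsum('ki,ij,kj->k', Xc, O, Xc) - m @ O @ m)
    # Chain rule through softmax: dL/dtheta_k = w_k * (g_k - w . g)
    g_theta = w * (g_w - w @ g_w)
    return np.sum(O ** 2), g_theta

theta = np.zeros(n)                          # start from uniform weights
losses = []
for _ in range(300):
    loss, g = loss_and_grad(theta, X)
    losses.append(loss)
    theta -= 1e-2 * g                        # plain gradient descent step
```

After training, the learned weights down-weight samples that drive the spurious cross-feature correlation, so the weighted covariance becomes closer to diagonal; in RFDG this reweighting is instead produced by a parameterized policy that clients can share without exchanging raw data.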