Abstract: Many federated learning approaches assume that all clients hold data from the same domain. In real-world scenarios, however, this assumption rarely holds, because clients collect data from distinct environments. Federated domain generalization addresses this challenge by improving model generalization to unseen domains. However, existing approaches suffer from a trade-off: they enhance performance on unseen domains at the cost of degrading performance on the source domains, because they suppress domain-specific features. To overcome this limitation, we propose Federated Discriminative Ensemble (FedDE), which improves generalization on both unseen and source domains by maximizing domain-invariant feature learning while minimizing the loss of domain-specific information. FedDE introduces two components: a common model that captures domain-invariant features and a localizer that preserves the domain-specific features the common model ignores. To ensure feature separation, we apply L2-norm regularization and adversarial training so that each component learns a distinct type of information. During inference, FedDE employs a client model ensemble strategy that leverages both domain-invariant and domain-specific knowledge to enhance performance across all domains. This ensemble mitigates information loss and significantly boosts accuracy on both source and unseen domains. Extensive experiments on multiple benchmark datasets demonstrate that FedDE outperforms existing methods, achieving superior performance on both source and unseen domains.
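The two-branch design and ensemble inference described above can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of the idea, not the authors' implementation: all names (Branch, fedde_loss, the separation weight lam, the feature dimensions) are assumptions, and the cosine-similarity penalty below merely stands in for the paper's L2-norm regularization and adversarial training.

```python
# Minimal sketch of the common-model + localizer idea, assuming PyTorch.
# Names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """A small feature extractor plus classifier head."""
    def __init__(self, in_dim=512, feat_dim=128, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

common = Branch()      # shared across clients: domain-invariant features
localizer = Branch()   # kept per client: domain-specific features

def fedde_loss(x, y, lam=0.1):
    """Classification loss for both branches plus a penalty that
    discourages the two feature spaces from overlapping."""
    z_c, logits_c = common(x)
    z_l, logits_l = localizer(x)
    ce = F.cross_entropy(logits_c, y) + F.cross_entropy(logits_l, y)
    # Penalize similarity between normalized features so each branch
    # captures distinct information (a stand-in for the L2-norm
    # regularization / adversarial separation described in the abstract).
    sep = (F.normalize(z_c, dim=1) * F.normalize(z_l, dim=1)).sum(dim=1).pow(2).mean()
    return ce + lam * sep

@torch.no_grad()
def predict(x):
    """Ensemble inference: average the two branches' predictions."""
    _, logits_c = common(x)
    _, logits_l = localizer(x)
    return (logits_c.softmax(dim=1) + logits_l.softmax(dim=1)) / 2

# Usage with a dummy batch of pre-extracted 512-d features:
x = torch.randn(8, 512)
y = torch.randint(0, 10, (8,))
loss = fedde_loss(x, y)
probs = predict(x)
```

Averaging the two branches' softmax outputs at inference mirrors the ensemble strategy the abstract describes, letting domain-specific knowledge contribute on source domains while the common branch carries over to unseen ones.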
DOI: 10.1016/j.ins.2025.122804