Abstract: Federated learning, hailed as a privacy-preserving collaboration paradigm, has garnered significant attention in research circles. Typically, multiple clients collaborate to integrate multi-party knowledge, learning a shared global model from decentralized local data. Despite its popularity, the surge of approaches addressing various realistic challenges has exposed a critical issue: the aggregated model may fail to capture the diverse domain knowledge across participants, leading to limited performance in cross-client domain scenarios. Moreover, incorporating knowledge from the participating parties can hinder generalization to out-of-client distributions. To address this challenge comprehensively, we dissect federated generalization into two dimensions: participating domains and unseen domains. In this paper, we propose a novel solution incorporating domain-specialized and domain-invariant experts, designed to faithfully represent individual domain characteristics and cross-domain universality, respectively. Additionally, we introduce a pioneering test-time expert aggregation strategy that uses prediction-consistency metrics to combine the experts, specifically tailored for handling agnostic testing distributions. Empirical results validate that the proposed methodology significantly enhances federated performance on both cross-client and out-of-client generalization under different scenarios and relative to various related methods. A comprehensive ablation study demonstrates the effectiveness of the proposed modules.
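To make the test-time aggregation idea concrete, the following is a minimal sketch, not the paper's actual method: it assumes each expert outputs class probabilities on a test batch, scores each expert by its agreement with the ensemble mean (a hypothetical instantiation of a prediction-consistency metric), and aggregates the experts with the resulting normalized weights.

```python
import numpy as np

def aggregate_experts(expert_probs):
    """Hypothetical test-time aggregation sketch: weight each expert's
    class-probability predictions by how consistent they are with the
    mean prediction of all experts (higher agreement -> higher weight).

    expert_probs: array-like of shape (n_experts, n_samples, n_classes).
    Returns aggregated probabilities of shape (n_samples, n_classes).
    """
    expert_probs = np.asarray(expert_probs, dtype=float)
    mean_pred = expert_probs.mean(axis=0)  # (n_samples, n_classes)
    # Consistency score per expert: average inner product with the mean.
    consistency = (expert_probs * mean_pred).sum(axis=2).mean(axis=1)
    weights = consistency / consistency.sum()  # normalize to a convex combination
    return np.tensordot(weights, expert_probs, axes=1)

# Toy usage: three "experts" on a batch of two samples, three classes.
probs = [
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]],
    [[0.1, 0.1, 0.8], [0.8, 0.1, 0.1]],  # outlier expert, down-weighted
]
agg = aggregate_experts(probs)
```

Because the weights form a convex combination of valid probability vectors, the aggregated output remains a proper distribution per sample, and experts that disagree with the consensus on the agnostic test distribution contribute less.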