Multi-Source Collaborative Style Augmentation and Domain-Invariant Learning for Federated Domain Generalization
Keywords: Federated learning, domain generalization, transfer learning
TL;DR: Federated Multi-Source Collaborative Style Augmentation and Domain-Invariant Learning
Abstract: Federated domain generalization aims to learn a generalizable model from multiple decentralized source domains for deployment on unseen target domains. Style augmentation approaches have achieved significant advances in domain generalization. However, under the data-decentralization scenario, existing style augmentation approaches either explore data styles within each isolated source domain or interpolate style information across the existing source domains, which leads to a limited style space. To address this issue, we propose a Federated Multi-source Collaborative Style Augmentation and Domain-invariant learning approach, i.e., Fed-MCSAD, for federated domain generalization. Specifically, we propose a federated multi-source collaborative style augmentation module to generate data in a broader style space. In addition, we conduct domain-invariant learning between the original and augmented data via cross-domain feature alignment within the same class. Furthermore, we perform class-relation ensemble distillation across diverse classes to learn a domain-invariant model. By alternately conducting collaborative style augmentation and domain-invariant learning, the model generalizes well to unseen target domains. Extensive experiments on multiple domain generalization datasets indicate that Fed-MCSAD significantly outperforms state-of-the-art federated domain generalization approaches, by up to 4.3% in average accuracy.
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 9402