Fed-DIP: Federated Domain Generalization by Synergizing Implicit Disentanglement and Context-Aware Prompting
Keywords: Federated Learning, Domain Generalization
Abstract: Federated Domain Generalization (FedDG) seeks to train, on decentralized data from multiple source domains, a model that generalizes effectively to unseen target domains. A fundamental challenge lies in achieving robust feature disentanglement, i.e., separating domain-invariant from domain-specific features, which is critical for generalization but severely hindered by the data-isolated nature of Federated Learning. Existing methods often struggle with this separation, leading to incomplete decoupling and limited performance. To address this, we propose Fed-DIP, a novel framework that introduces an Implicit Decoupling Distillation mechanism. This mechanism achieves fine-grained feature separation by comparing logit outputs for local image regions, without any direct data access. This enables robust aggregation of domain-invariant knowledge while preserving rich, domain-specific information on the client side. Furthermore, to unlock the potential of this preserved local knowledge, we introduce the Context-Aware Prompt Encoder (CAPE). Unlike prior work that selects prompts from a fixed set, CAPE is fully generative: it dynamically synthesizes adaptive, end-to-end optimizable text prompts directly from local visual features. These generated prompts provide nuanced, contextual guidance, enabling the model to leverage domain-specific insights for more robust and accurate decision-making. Extensive experiments on the PACS, VLCS, OfficeHome, and DomainNet benchmarks demonstrate that our method achieves state-of-the-art performance, validating the effectiveness of our framework.
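To make the Implicit Decoupling Distillation idea concrete, below is a minimal PyTorch sketch based only on the abstract's description (comparing logit outputs for local image regions, with no raw data leaving the client). The paper's exact formulation is not given here; the helper names `region_crops` and `implicit_decoupling_loss` are hypothetical, and a plain temperature-scaled KL distillation stands in for the actual loss.

```python
# Hypothetical sketch: region-level logit distillation on a client.
# Assumes the frozen aggregated (global) model acts as a teacher on random
# local image crops, transferring domain-invariant knowledge to the local
# model without sharing data across clients.
import torch
import torch.nn.functional as F

def region_crops(images, num_regions=4, size=64):
    """Sample random square regions from a batch of images (B, C, H, W).
    Assumes H and W are at least `size`."""
    B, C, H, W = images.shape
    crops = []
    for _ in range(num_regions):
        top = torch.randint(0, H - size + 1, (1,)).item()
        left = torch.randint(0, W - size + 1, (1,)).item()
        crops.append(images[:, :, top:top + size, left:left + size])
    return crops  # list of (B, C, size, size) tensors

def implicit_decoupling_loss(local_model, global_model, images, T=4.0):
    """Distill region-level logits of the frozen global model into the
    local model via temperature-scaled KL divergence."""
    loss = 0.0
    crops = region_crops(images)
    for crop in crops:
        with torch.no_grad():
            g_logits = global_model(crop)   # teacher: aggregated model
        l_logits = local_model(crop)        # student: client model
        loss = loss + F.kl_div(
            F.log_softmax(l_logits / T, dim=-1),
            F.softmax(g_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
    return loss / len(crops)
```

Similarly, a hypothetical sketch of CAPE's generative prompting: a small generator maps local visual features to continuous prompt-token embeddings that are optimized end to end with the task loss, rather than selecting prompts from a fixed pool. The module name and dimensions below are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: generate continuous prompt tokens from visual features.
import torch.nn as nn

class ContextAwarePromptEncoder(nn.Module):
    def __init__(self, feat_dim=512, prompt_len=8, embed_dim=512):
        super().__init__()
        self.prompt_len = prompt_len
        self.embed_dim = embed_dim
        hidden = embed_dim * prompt_len // 2
        self.generator = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, embed_dim * prompt_len),
        )

    def forward(self, visual_feats):            # (B, feat_dim)
        prompts = self.generator(visual_feats)  # (B, prompt_len * embed_dim)
        # Reshape to prompt tokens that could be prepended to the input of a
        # text encoder (e.g., CLIP-style) for context-aware guidance.
        return prompts.view(-1, self.prompt_len, self.embed_dim)
```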
Primary Area: generative models
Submission Number: 7512