FedPartWhole: Federated Domain Generalization via Consistent Part-Whole Hierarchies

Published: 01 Jan 2025, Last Modified: 14 May 2025. Pattern Anal. Appl. 2025. License: CC BY-SA 4.0
Abstract: Federated Domain Generalization (FedDG) aims to address the challenge of generalizing to unseen domains at test time while adhering to data privacy constraints that prevent centralized storage of data from the various client domains. Existing approaches can be broadly classified into domain alignment, data manipulation, learning strategies, and optimization of model aggregation weights. This paper introduces a novel approach to FedDG that focuses on the backbone model architecture. The key insight is that objects, even under substantial domain shifts and appearance variations, retain a consistent hierarchical structure of parts and wholes. For example, a photograph and a sketch of a dog share the same structural organization, comprising a head, body, limbs, and so on. Our architecture explicitly integrates a feature representation of the image parse tree, enabling robust generalization across domains. To the best of our knowledge, this is the first work to approach FedDG from a model architecture perspective. We compared the performance of our proposed backbone against a comparably sized CNN-based backbone (MobileNet) for 5 different algorithms on standard benchmark datasets (PACS and VLCS), and the results showed an average performance improvement of up to 17.3%. Additionally, our approach marginally outperforms the Vision Transformer (ViT-Small) on average, despite using approximately 5x fewer parameters. Unlike conventional convolutional neural networks, our method is inherently interpretable, fostering trust in its predictions, a critical asset in federated learning scenarios.