FEDTAIL: Federated Long-Tailed Domain Generalization with Sharpness-Guided Gradient Matching

Published: 10 Jun 2025, Last Modified: 29 Jun 2025 · CFAgentic @ ICML'25 Poster · CC BY 4.0
Keywords: Federated Learning, Decentralized Learning, Parallel Computing, Machine Learning, Long Tail
TL;DR: FedTAIL is a federated domain generalization method that improves robustness under domain shift and class imbalance by aligning gradients, minimizing class-wise sharpness, and adaptively weighting losses based on curvature.
Abstract: Domain Generalization (DG) aims to train models that can generalize to unseen target domains without requiring access to target data. While recent advances in loss landscape smoothing have improved generalization, they often struggle in scenarios with long-tailed class distributions and conflicting optimization objectives. We propose FedTAIL, a federated domain generalization framework that addresses these limitations through sharpness-guided, gradient-aligned learning. Specifically, we introduce a gradient coherence term to reduce conflicts between classification and adversarial losses, enhancing optimization stability. To better handle class imbalance, we perform class-wise sharpness minimization and propose a curvature-aware dynamic weighting scheme that adaptively emphasizes tail classes. Furthermore, we improve conditional distribution alignment by incorporating sharpness-aware perturbations into entropy regularization. Our approach unifies objective harmonization, class-aware robustness, and conditional consistency in a scalable and generalizable framework. Extensive experiments on standard domain generalization benchmarks demonstrate that FedTAIL achieves state-of-the-art performance, particularly under domain shift and label imbalance, validating its effectiveness in both centralized and federated learning settings. Our code is publicly available at: https://github.com/sunnyinAI/FedTail
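Illustrative sketch: The abstract names two core mechanisms, a gradient coherence term between the classification and adversarial objectives, and curvature-aware class weights derived from class-wise sharpness. The PyTorch sketch below shows one plausible way such components could be implemented. The function names, the SAM-style perturbation radius `rho`, the temperature `tau`, and the exact formulas are assumptions for illustration only and are not taken from the paper.

```python
# Hedged sketch of FedTAIL-style components, assuming a standard PyTorch
# classifier. All specifics here are illustrative, not the paper's definitions.
import torch
import torch.nn.functional as F

def gradient_coherence(model, loss_cls, loss_adv):
    """1 - cosine similarity between the two loss gradients; minimizing this
    encourages the objectives to pull parameters in compatible directions."""
    params = [p for p in model.parameters() if p.requires_grad]
    g1 = torch.autograd.grad(loss_cls, params, create_graph=True, retain_graph=True)
    g2 = torch.autograd.grad(loss_adv, params, create_graph=True, retain_graph=True)
    v1 = torch.cat([g.reshape(-1) for g in g1])
    v2 = torch.cat([g.reshape(-1) for g in g2])
    return 1.0 - F.cosine_similarity(v1, v2, dim=0)

@torch.no_grad()
def _perturb(params, grads, rho):
    """One SAM-style ascent step: move weights toward higher loss by radius rho."""
    norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    for p, e in zip(params, eps):
        p.add_(e)
    return eps

def classwise_sharpness(model, x, y, num_classes, rho=0.05):
    """Per-class sharpness estimate: loss at SAM-perturbed weights minus loss at
    the current weights, measured separately on each class's examples."""
    params = [p for p in model.parameters() if p.requires_grad]
    base = F.cross_entropy(model(x), y, reduction="none")
    grads = torch.autograd.grad(base.mean(), params)
    eps = _perturb(params, grads, rho)
    with torch.no_grad():
        pert = F.cross_entropy(model(x), y, reduction="none")
        for p, e in zip(params, eps):  # restore the original weights
            p.sub_(e)
    sharp = torch.zeros(num_classes, device=x.device)
    for c in range(num_classes):
        mask = y == c
        if mask.any():
            sharp[c] = (pert[mask] - base[mask]).mean().detach()
    return sharp

def sharpness_to_weights(sharp, tau=1.0):
    """Softmax over per-class sharpness (mean weight 1): sharper, typically
    tail, classes receive larger loss weights."""
    return F.softmax(sharp / tau, dim=0) * sharp.numel()
```

In a training loop, the coherence term would typically be added to the total loss alongside the task losses, while the softmax weights would rescale the per-class cross-entropy so that higher-curvature (often tail) classes are emphasized, in the spirit of the curvature-aware dynamic weighting the abstract describes.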
Submission Number: 34