Abstract: Federated Learning (FL) is a popular distributed machine learning method that enables the development of a robust global model through decentralized computation and periodic model aggregation, without requiring direct access to clients’ data. However, data heterogeneity poses a significant challenge in FL, and a globally long-tailed class distribution exacerbates this issue. While substantial research has focused on mitigating the performance degradation caused by long-tailed distributions, existing methods typically concentrate on discrepancies between local and global class distributions, often overlooking the fact that these discrepancies stem from variations in the data itself. To address this, we propose Federated Context Optimization and Feature Information Decoupling (FedDR), a novel approach that partitions each sample's features to extract and leverage long-tail, global, personalized, and label-text information, thereby enhancing the representational distinction of tail classes. Specifically, we first design a Feature Information Decoupling module that separates global, personalized, and long-tail information within the features and incorporates this information into the loss function to strengthen the global model's focus on the personalized information in tail samples. Furthermore, to exploit the textual label information embedded in the samples, we integrate a cross-modal model, CoOp, which utilizes open-vocabulary prior knowledge, and apply dynamic knowledge distillation between the client model and CoOp to enhance the client model's feature representation capability. Extensive experimental results on multiple benchmarks demonstrate that the proposed FedDR outperforms state-of-the-art methods in the federated long-tailed learning setting.
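To make the distillation component more concrete, below is a minimal sketch (not the paper's implementation) of how a client model's logits can be distilled toward the outputs of a CoOp-style prompt-learned teacher: a standard cross-entropy term is blended with a temperature-scaled KL term. The function name `distillation_loss`, the fixed weight `alpha` (which stands in for the paper's dynamic weighting), and the random tensors used in place of real client and CoOp outputs are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend supervised cross-entropy with KL distillation from a
    CoOp-style teacher. `alpha` is a fixed scalar here, standing in
    for the dynamic weighting described in the abstract."""
    # Supervised term on the client (student) predictions.
    ce = F.cross_entropy(student_logits, labels)
    # Temperature-scaled KL divergence toward the teacher's soft labels.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return (1 - alpha) * ce + alpha * kd

# Toy usage: random tensors replace real client and CoOp outputs.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```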