Abstract: Out-of-distribution data leads clients to pursue divergent optimization directions, which weakens collaborative modeling in federated learning. Existing methods aim to decouple invariant features in the latent space to mitigate attribute bias; however, their performance is limited by suboptimal decoupling capabilities in complex latent spaces. To address this problem, this paper presents a method, termed FedAKD, that adaptively identifies meaningful visual regions in images to guide the model toward learning causal features. It comprises two main modules. The attentive modeling module adaptively locates critical regions to mitigate the negative impact of irrelevant elements, which are significant contributors to distribution heterogeneity. The attention-guided representation learning module leverages attentive knowledge to direct the local model's attention to important regions; it acts as a soft attention regularizer that mitigates the trade-off between capturing category-relevant and irrelevant contextual information in images. Experiments on four datasets, comprising a performance comparison, an ablation study, and a case study, demonstrate that FedAKD effectively enhances attention to causal features and achieves superior performance compared with state-of-the-art methods.
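The soft attention regularizer described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes (hypothetically) that spatial attention maps are derived by collapsing the channel dimension of a feature map, and that the regularizer penalizes the squared distance between the local model's map and the attentive-knowledge guide map, weighted by a coefficient `lam`:

```python
import numpy as np

def attention_map(features):
    # Collapse a C x H x W feature map into a normalized H x W
    # spatial attention map (mean of absolute activations per location).
    a = np.abs(features).mean(axis=0)
    return a / (a.sum() + 1e-8)

def attention_regularizer(local_feat, guide_feat, lam=0.1):
    # Soft attention penalty: squared difference between the local
    # model's attention map and the guide (attentive-knowledge) map.
    # Added to the task loss, it nudges the local model to attend to
    # the regions the guide deems important without hard masking.
    diff = attention_map(local_feat) - attention_map(guide_feat)
    return lam * float(np.sum(diff ** 2))
```

In training, the total objective would then be the usual task loss plus this penalty, e.g. `total = task_loss + attention_regularizer(local_feat, guide_feat)`; `lam` controls how strongly attentive knowledge constrains local optimization.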