Abstract: Federated Learning (FL) is an emerging direction in distributed machine learning that enables jointly training a global model without sharing data with the server. However, data heterogeneity biases parameter aggregation at the server, leading to slower convergence and poorer accuracy of the global model. To cope with this, most existing works enforce regularization in local optimization or improve the model aggregation scheme at the server. Though effective, they lack a deep understanding of cross-client features. In this paper, we propose a saliency latent space feature aggregation method (FedSLS) across federated clients. Using Guided BackPropagation (GBP), we transform deep models into powerful and flexible visual fidelity encoders, applicable to general state inputs across different image domains, and achieve powerful aggregation in the form of saliency latent features. Notably, since GBP is label-insensitive, it suffices to capture saliency features only once on each client. Experimental results demonstrate that FedSLS yields significant improvements over the state of the art in terms of accuracy, especially in highly heterogeneous settings. For example, on the CIFAR-10 dataset, FedSLS achieves 63.43% accuracy in the strongly heterogeneous setting α=0.05, which is 6% to 23% higher than the other baselines.
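As a rough illustration of the GBP step mentioned above (not the authors' implementation), the sketch below shows one way a client could derive label-insensitive saliency features from its local model in PyTorch; the toy network, batch shapes, and the choice of back-propagating the top logit are assumptions for illustration only.

```python
# Minimal sketch: Guided Backpropagation (GBP) saliency extraction on a client.
# Hypothetical model and data; only the hook mechanics reflect standard GBP.
import torch
import torch.nn as nn

def guided_relu_hook(module, grad_input, grad_output):
    # GBP rule: only positive gradients flow back through ReLUs
    # (the forward ReLU mask has already zeroed negative activations).
    return (torch.clamp(grad_input[0], min=0.0),)

# Toy local model standing in for the client's network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)
model.eval()

# Attach the guided hook to every ReLU in the network.
handles = [m.register_full_backward_hook(guided_relu_hook)
           for m in model.modules() if isinstance(m, nn.ReLU)]

images = torch.randn(8, 3, 32, 32, requires_grad=True)  # e.g. a CIFAR-10 batch
logits = model(images)

# GBP is label-insensitive here: back-propagate the summed top logit
# rather than a ground-truth class, so one pass per client suffices.
score = logits.max(dim=1).values.sum()
model.zero_grad()
score.backward()
saliency = images.grad.detach()  # (8, 3, 32, 32) saliency features

for h in handles:
    h.remove()
```

In this sketch the input-gradient maps stand in for the "saliency latent features" that would be aggregated across clients; the paper's actual encoder and aggregation scheme may differ.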