Keywords: Federated learning, Domain generalization, Gradient alignment
TL;DR: In this paper we provide a theoretical framework linking domain shift and gradient alignment in a federated setup, which can benefit federated domain generalization.
Abstract: Gradient alignment has shown empirical success in federated domain generalization, yet a theoretical foundation for this approach remains unexplored. To address this gap, we provide a theoretical framework linking domain shift and gradient alignment. We begin by modeling the similarity between domains through the mutual information of their data. We then show that as the domain shift between clients in a federated system increases, the covariance between their respective gradients decreases. This link is first established for federated supervised learning and subsequently extended to federated unsupervised learning, demonstrating that our findings remain consistent even in a self-supervised setup. Our work can further aid the development of robust models by clarifying how gradient alignment affects learning dynamics and domain generalization.
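The sketch below is a minimal, hypothetical illustration (not the paper's construction) of the stated relationship: it simulates clients whose labelling rules drift apart by an amount `delta` and reports the cosine alignment between their gradients of a shared linear model under squared error. The data distributions, the drift parameterization, and all names here are illustrative assumptions.

```python
# Hypothetical sketch: measure how gradient alignment between clients
# changes as an artificial domain shift `delta` grows. Not the paper's method;
# the synthetic data and linear model are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 20
w = rng.normal(size=d)            # shared model weights at the current round
w_base = np.zeros(d)              # reference client's true labelling weights
drift = rng.normal(size=d)        # direction in which client domains drift
drift /= np.linalg.norm(drift)


def client_gradient(w, delta, n=4096):
    """MSE gradient for a synthetic client whose labelling rule is shifted by `delta`."""
    X = rng.normal(size=(n, d))
    y = X @ (w_base + delta * drift) + 0.1 * rng.normal(size=n)
    return X.T @ (X @ w - y) / n


g_ref = client_gradient(w, delta=0.0)   # gradient of an unshifted client
for delta in [0.0, 0.5, 1.0, 2.0, 4.0]:
    g = client_gradient(w, delta)
    cos = g_ref @ g / (np.linalg.norm(g_ref) * np.linalg.norm(g))
    print(f"domain shift delta={delta:.1f}  gradient cosine alignment={cos:.3f}")
```

Under these assumptions, the printed alignment shrinks as `delta` grows, mirroring the trend the abstract describes between domain shift and gradient covariance.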
Is NeurIPS Submission: No
Submission Number: 58