A Theoretical Framework for Federated Domain Generalization with Gradient Alignment

Published: 11 Oct 2024 · Last Modified: 25 Nov 2024 · M3L Poster · CC BY 4.0
Keywords: Federated learning, Domain generalization, Gradient alignment
TL;DR: We provide a theoretical framework linking domain shift and gradient alignment in the federated setting, which can benefit federated domain generalization.
Abstract: Gradient alignment has shown empirical success in federated domain generalization, yet it lacks a theoretical foundation. To address this gap, we provide a theoretical framework linking domain shift and gradient alignment. We begin by modeling the similarity between domains through the mutual information of their data. We then show that as the domain shift between clients in a federated system increases, the covariance between their respective gradients decreases. This link is first established for federated supervised learning and then extended to federated unsupervised learning, showing that our findings hold even in a self-supervised setup. Our work can further aid the development of robust models by clarifying how gradient alignment affects learning dynamics and domain generalization.
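The claimed link between domain shift and gradient alignment can be illustrated with a toy sketch (this is not the paper's method or proof): two federated clients with differently distributed inputs compute gradients of a simple linear model on their own data, and we measure how aligned those gradients are via cosine similarity. The model, data distributions, and shift parameter here are all hypothetical choices for illustration.

```python
import numpy as np

def client_gradient(w, X, y):
    """Mean-squared-error gradient of a linear model on one client's local data."""
    residual = X @ w - y
    return X.T @ residual / len(y)

def gradient_alignment(g1, g2):
    """Cosine similarity between two clients' gradients (1 = fully aligned)."""
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
w = np.zeros(2)  # shared model at the current federated round

# Client A: base input distribution; Client B: covariate-shifted inputs
# (a mean offset stands in for "domain shift" in this toy setup).
XA = rng.normal(size=(200, 2))
XB = rng.normal(loc=3.0, size=(200, 2))
yA = XA @ w_true + 0.1 * rng.normal(size=200)
yB = XB @ w_true + 0.1 * rng.normal(size=200)

gA = client_gradient(w, XA, yA)
gB = client_gradient(w, XB, yB)
print(f"gradient alignment across clients: {gradient_alignment(gA, gB):.3f}")
```

Varying the `loc` offset for client B changes the degree of shift; the paper's contribution is to formalize how such shift, quantified through mutual information between domains, bounds the covariance between client gradients.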
Is NeurIPS Submission: No
Submission Number: 58