Keywords: Federated Learning, Model Heterogeneity, Hardware Heterogeneity, Domain Adaptation, Distributed Learning, Privacy, Generalization
TL;DR: We propose a federated conditional moment alignment algorithm for the model-heterogeneous federated learning setup and derive the convergence and generalization properties of the algorithm.
Abstract: In this work, we study model-heterogeneous Federated Learning (FL) for classification, where different clients have different model architectures. Unlike existing work on model heterogeneity, we neither require access to a public dataset nor impose constraints on clients' model architectures, and we keep both the clients' models and their data private. We prove a generalization result that provides fundamental insights into the role of representations in FL, and we propose a theoretically grounded algorithm, Federated Conditional Moment Alignment (FedCMA), that aligns the class-conditional distributions of each client in the feature space. We prove convergence of the algorithm and show empirically that FedCMA outperforms other baselines on CIFAR-10, MNIST, EMNIST, and FEMNIST in the considered setting.
Is Student: Yes
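To illustrate the general idea behind conditional moment alignment, the following is a minimal sketch of matching class-conditional first moments (per-class feature means) against a shared reference. All names (`class_conditional_means`, `moment_alignment_loss`) and the zero reference means are hypothetical illustrations, not the paper's actual FedCMA algorithm, which the abstract only summarizes.

```python
import numpy as np

def class_conditional_means(features, labels, num_classes):
    """Per-class mean of feature vectors (the first conditional moment)."""
    d = features.shape[1]
    means = np.zeros((num_classes, d))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():  # leave the mean at zero for absent classes
            means[c] = features[mask].mean(axis=0)
    return means

def moment_alignment_loss(local_means, global_means):
    """Mean squared distance between local and shared class-conditional means."""
    return float(np.mean((local_means - global_means) ** 2))

# Toy example: one client with 2 classes and 3-dimensional features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 3))
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
local = class_conditional_means(feats, labels, num_classes=2)
shared = np.zeros_like(local)  # placeholder for server-side reference means
loss = moment_alignment_loss(local, shared)
```

In a federated setting one would expect such a penalty to be added to each client's local objective, pulling heterogeneous models toward a common feature-space geometry without sharing raw data.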