Abstract: Federated domain adaptation (FDA) enables training large-scale machine learning models over networked systems while adapting to novel target domains. Existing FDA methods incur excessive communication overhead when aligning feature distributions between the source and target domains. In this paper, we propose FedRF-Adapt, a communication-efficient FDA protocol whose communication complexity is independent of the sample size and which is robust to limited network reliability. Extensive numerical experiments demonstrate the advantageous performance of FedRF-Adapt.