Jointly Learning from Decentralized (Federated) and Centralized Data to Mitigate Distribution Shift

Published: 02 Dec 2021, Last Modified: 05 May 2023. NeurIPS 2021 Workshop DistShift Poster.
Keywords: federated learning, decentralized learning, privacy, security, distribution shift, distribution skew, mobile computing
TL;DR: Combining Federated Learning with additional centralized/datacenter data is an effective way to mitigate train vs. inference distribution shift; this paper presents several strategies for how to do it.
Abstract: With privacy as a motivation, Federated Learning (FL) is an increasingly used paradigm where learning takes place collectively on edge devices, each with a cache of user-generated training examples that remain resident on the local device. These on-device training examples are gathered in situ during the course of users' interactions with their devices, and thus are highly reflective of at least part of the inference data distribution. Yet a distribution shift may still exist; the on-device training examples may lack some data inputs expected to be encountered at inference time. This paper proposes a way to mitigate this shift: selective usage of datacenter data, mixed in with FL. By mixing decentralized (federated) and centralized (datacenter) data, we can form an effective training data distribution that better matches the inference data distribution, resulting in more useful models while still meeting the private training data access constraints imposed by FL.
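The abstract does not spell out a particular mixing mechanism (the paper presents several strategies). As a hedged illustration only, one plausible instantiation is to treat the datacenter corpus as an extra "client" in a FedAvg-style round and blend its update into the device average with a tunable weight. The function names (`local_sgd`, `mixed_fedavg_round`), the scalar linear model, and the mixing weight `dc_weight` are all assumptions for this toy sketch, not the paper's method:

```python
import random

def local_sgd(w, data, lr=0.1):
    """One pass of SGD on squared loss for a scalar model y = w * x."""
    for x, y in data:
        w -= lr * 2.0 * (w * x - y) * x
    return w

def mixed_fedavg_round(w, device_datasets, datacenter_data, dc_weight=0.5):
    """One communication round: average the device updates, then blend in
    an update computed on centralized datacenter data with weight dc_weight."""
    device_avg = sum(local_sgd(w, d) for d in device_datasets) / len(device_datasets)
    dc_update = local_sgd(w, datacenter_data)
    return (1.0 - dc_weight) * device_avg + dc_weight * dc_update

rng = random.Random(0)
TRUE_W = 3.0  # ground-truth parameter generating the toy targets

def make_data(n, lo, hi):
    return [(x, TRUE_W * x) for x in (rng.uniform(lo, hi) for _ in range(n))]

# Distribution shift in miniature: devices only observe inputs in [0, 1],
# while the datacenter cache covers the region [1, 2] expected at inference.
devices = [make_data(5, 0.0, 1.0) for _ in range(4)]
datacenter = make_data(10, 1.0, 2.0)

w = 0.0
for _ in range(20):
    w = mixed_fedavg_round(w, devices, datacenter)
```

With noiseless targets the blended model recovers `TRUE_W` even though no single device sees the shifted input region; setting `dc_weight=0.0` recovers plain FedAvg trained only on the device distribution.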