Tackling Distribution Shifts in Federated Learning with Superquantile Aggregation

Published: 21 Oct 2022, Last Modified: 05 May 2023
NeurIPS 2022 Workshop DistShift Spotlight
Keywords: federated learning, distribution shift, superquantile, tail performance
TL;DR: We address the train-test distribution shift in federated learning with a distributionally robust superquantile aggregation approach.
Abstract: Federated learning has emerged as the predominant framework for distributed machine learning over decentralized data, e.g., on mobile phones. The usual approaches suffer from a train-test distribution shift: the model is trained to fit the average population distribution but is deployed on individual clients, whose data distributions can be quite different. We present a distributionally robust approach to federated learning based on a risk measure known as the superquantile and show how to optimize it by interleaving federated averaging steps with quantile computation. We demonstrate experimentally that our approach is competitive with the usual ones in terms of average error and outperforms them on tail statistics of the error.
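For intuition: the superquantile of a loss distribution at level θ is the average of the losses above the θ-quantile, i.e., the mean over the worst (1 − θ) fraction of clients. Below is a minimal sketch of what one such aggregation round could look like, assuming the server can collect each client's local loss alongside its model update; the function names, the hard 0/1 tail weighting, and the level `theta` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def superquantile_weights(losses, theta):
    """Aggregation weights concentrated on the worst (1 - theta)
    fraction of clients, as ranked by their reported local losses.

    theta = 0 recovers uniform (FedAvg-style) weights; as theta -> 1,
    the weight concentrates on the hardest clients.
    """
    losses = np.asarray(losses, dtype=float)
    q = np.quantile(losses, theta)      # theta-quantile of client losses
    w = (losses >= q).astype(float)     # keep only the tail clients
    return w / w.sum()

def aggregate_round(client_updates, client_losses, theta=0.8):
    """One server round: quantile computation on the reported losses,
    then a federated-averaging step reweighted toward the tail."""
    w = superquantile_weights(client_losses, theta)
    return sum(wi * ui for wi, ui in zip(w, client_updates))

# Hypothetical usage with three clients.
updates = [np.array([0.1, -0.2]), np.array([0.4, 0.0]), np.array([-0.3, 0.5])]
losses = [0.2, 1.5, 0.9]
direction = aggregate_round(updates, losses, theta=0.5)  # averages the two hardest clients
```

The hard 0/1 split is a simplification: a faithful superquantile weighting would also fold in client sample sizes and assign a fractional weight to the client sitting exactly at the quantile.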