Reducing Forgetting In Federated Learning with Truncated Cross-Entropy

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: continual learning, catastrophic forgetting, distribution shifts, federated learning
TL;DR: Inspired by methods in continual learning, we propose and analyze a simple approach to supervised federated learning with non-IID data.
Abstract: In federated learning, a global model is learned by aggregating model updates computed from a set of client nodes, each holding its own data. A key challenge in federated learning is the heterogeneity of data across clients, whose data distributions differ from one another. Standard federated learning algorithms perform multiple gradient steps before synchronizing the model, which can lead clients to overly minimize their local objectives and diverge from other clients' solutions, particularly in the supervised learning setting. We demonstrate that in such a setting, individual client models experience the "catastrophic forgetting" phenomenon with respect to other clients' data. We propose a simple yet efficient approach that modifies the cross-entropy objective on a per-client basis such that classes outside a client's label set are shielded from abrupt representation change. Through extensive empirical evaluations, we demonstrate that our approach can greatly alleviate this problem, especially in the most challenging federated learning settings with high heterogeneity, low participation, and large numbers of clients.
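The submission does not spell out the exact objective, but one plausible reading of a "truncated" cross-entropy is to restrict the softmax normalization to the classes present on a client, so the output weights of absent classes receive no gradient. A minimal PyTorch sketch of that idea follows; the function and variable names (`truncated_cross_entropy`, `client_classes`) are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a per-client truncated cross-entropy: the softmax is
# computed only over the client's local label set, so logits (and hence output
# weights) of absent classes are shielded from gradient updates.
import torch
import torch.nn.functional as F

def truncated_cross_entropy(logits, targets, client_classes):
    """Cross-entropy restricted to the classes present on this client.

    logits:         [batch, num_classes] raw model outputs
    targets:        [batch] integer labels, all within client_classes
    client_classes: 1-D LongTensor of class indices held by this client
    """
    num_classes = logits.size(1)
    mask = torch.full((num_classes,), float("-inf"), device=logits.device)
    mask[client_classes] = 0.0
    # Adding -inf drops absent classes from the softmax normalization; their
    # probabilities are exactly zero, so their gradients are exactly zero.
    return F.cross_entropy(logits + mask, targets)

# Usage: a client that only holds classes {0, 3, 7} out of 10.
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.tensor([0, 3, 7, 3])
client_classes = torch.tensor([0, 3, 7])
loss = truncated_cross_entropy(logits, targets, client_classes)
loss.backward()
print(logits.grad[:, [1, 2, 4]].abs().sum())  # tensor(0.): absent classes untouched
```

Under this reading, aggregation on the server is unchanged; only each client's local loss is masked, which is why the approach is cheap to apply in standard federated averaging pipelines.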
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning