Federated Learning on Small Batch Sizes via Batch Renormalization

Published: 19 Mar 2024 · Last Modified: 02 Apr 2024 · Tiny Papers @ ICLR 2024 · CC BY 4.0
Keywords: Federated Learning, Non-IID, Batch Renormalization
Abstract: When the batch size is too small in personalized federated learning, model parameters fluctuate significantly due to data variability and outliers, leading to highly stochastic gradients across the model's layers. This ultimately prolongs convergence and poses a significant challenge to model training. To address this issue, we propose applying batch renormalization locally before averaging the model. Experimental results demonstrate that this approach effectively improves the accuracy of personalized models trained with small batches in Non-IID scenarios.
Submission Number: 88
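
The page does not include the authors' code, so the following is a minimal, hypothetical sketch of the idea the abstract describes: each client normalizes locally with batch renormalization (Ioffe, 2017), which corrects small-batch statistics toward running statistics via clipped factors r and d, and the server then averages client models FedAvg-style. The `BatchRenorm1d` module, `fedavg` function, and all hyperparameter values here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: local batch renormalization + FedAvg-style averaging.
# Not the authors' code; layer/function names and defaults are assumptions.
import copy
import torch
import torch.nn as nn

class BatchRenorm1d(nn.Module):
    """Batch renormalization for (N, C) inputs: normalizes with batch
    statistics, but corrects them toward the running statistics via the
    clipped factors r and d, stabilizing training at small batch sizes."""
    def __init__(self, num_features, eps=1e-5, momentum=0.01,
                 r_max=3.0, d_max=5.0):
        super().__init__()
        self.eps, self.momentum = eps, momentum
        self.r_max, self.d_max = r_max, d_max
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_std", torch.ones(num_features))

    def forward(self, x):
        if self.training:
            mean = x.mean(dim=0)
            std = x.std(dim=0, unbiased=False) + self.eps
            with torch.no_grad():
                # r and d are treated as constants (no gradient flows
                # through them), as in the batch renormalization paper.
                r = (std / self.running_std).clamp(1 / self.r_max, self.r_max)
                d = ((mean - self.running_mean)
                     / self.running_std).clamp(-self.d_max, self.d_max)
                # Update running statistics after computing r and d.
                self.running_mean += self.momentum * (mean - self.running_mean)
                self.running_std += self.momentum * (std - self.running_std)
            x_hat = (x - mean) / std * r + d
        else:
            x_hat = (x - self.running_mean) / self.running_std
        return self.weight * x_hat + self.bias

@torch.no_grad()
def fedavg(global_model, client_models, client_weights):
    """Weighted FedAvg over client state dicts; the renorm layers'
    running statistics are averaged along with the parameters."""
    total = sum(client_weights)
    avg = copy.deepcopy(client_models[0].state_dict())
    for key in avg:
        avg[key] = sum(w * m.state_dict()[key]
                       for w, m in zip(client_weights, client_models)) / total
    global_model.load_state_dict(avg)
```

In this sketch, the clipping bounds `r_max` and `d_max` control how far batch statistics may deviate from the running ones; with r = 1 and d = 0 the layer reduces to standard batch normalization, which is exactly the regime that becomes unreliable at very small batch sizes.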