Understanding the Role of Layer Normalization in Label-Skewed Federated Learning

Published: 06 Feb 2024, Last Modified: 06 Feb 2024
Accepted by TMLR
Abstract: Layer normalization (LN) is a widely adopted deep learning technique, especially in the era of foundation models. Recently, LN has been shown to be surprisingly effective in federated learning (FL) with non-i.i.d. data. However, exactly why and how it works remains mysterious. In this work, we reveal the profound connection between layer normalization and the label shift problem in federated learning. To understand layer normalization better in FL, we identify the key contributing mechanism of normalization methods in FL, called feature normalization (FN), which applies normalization to the latent feature representation before the classifier head. Although LN and FN do not improve expressive power, they control feature collapse and local overfitting to heavily skewed datasets, and thus accelerate global training. Empirically, we show that normalization leads to drastic improvements on standard benchmarks under extreme label shift. Moreover, we conduct extensive ablation studies to understand the critical factors of layer normalization in FL. Our results verify that FN is an essential ingredient inside LN to significantly improve the convergence of FL while remaining robust to learning rate choices, especially under extreme label shift where each client has access to only a few classes.
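To make the feature normalization (FN) mechanism described in the abstract concrete, here is a minimal PyTorch-style sketch. It is not taken from the paper's code release; the class name, encoder, and dimensions are illustrative placeholders showing only the core idea of normalizing the latent feature vector before the classifier head.

```python
import torch
import torch.nn as nn

class FeatureNormClassifier(nn.Module):
    """Illustrative model: apply normalization to the latent feature
    representation right before the linear classifier head (the FN idea
    described in the abstract). Encoder and dimensions are placeholders."""

    def __init__(self, encoder: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                 # any backbone producing (B, feature_dim) features
        self.norm = nn.LayerNorm(feature_dim)  # normalization over the latent features
        self.head = nn.Linear(feature_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)   # latent representation
        z = self.norm(z)      # feature normalization before the classifier head
        return self.head(z)   # class logits
```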
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/huawei-noah/Federated-Learning/tree/main/Layer_Normalization
Supplementary Material: zip
Assigned Action Editor: ~Jasper_Snoek1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1629