The Other Side of the Coin: Unveiling the Downsides of Model Aggregation in Federated Learning from a Layer-peeled Perspective

ICLR 2026 Conference Submission14921 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Representation Learning, Model Aggregation
TL;DR: Understanding model aggregation in federated learning from a layer-peeled feature extraction perspective.
Abstract: In federated learning (FL), model aggregation plays a central role in enabling decentralized knowledge sharing. However, the aggregated model is often observed to underperform on local data until several further rounds of local training have passed. This temporary performance drop can slow down the convergence of the FL model. Prior work either treats the drop as an inherent cost of knowledge sharing among clients and gives it no special attention, or designs techniques to alleviate it directly; in both cases, its root causes remain poorly understood. To bridge this gap, we construct a framework for layer-peeled analysis of how feature representations evolve during model aggregation in FL. The framework focuses on two aspects that are critical to downstream performance: (1) the intrinsic quality of the extracted features, and (2) the alignment between features and the parameters of subsequent layers. Using this framework, we first investigate what model aggregation does to the internal feature extraction process. Our analysis reveals that aggregation degrades feature quality and weakens the coupling between intermediate features and subsequent layers, both of which are well shaped during local training. More importantly, this degradation is not confined to specific layers but accumulates progressively with network depth, a phenomenon we term Cumulative Feature Degradation (CFD). CFD severely impairs the quality of penultimate-layer features, ultimately compromising the model's decision-making capacity. Next, we examine how key FL settings, such as aggregation frequency, can exacerbate or alleviate the negative effects of model aggregation. Finally, we revisit several commonly used strategies, such as initialization from pretrained models, and explain why they are effective through layer-peeled analysis.
To the best of our knowledge, this is the first systematic study of model aggregation in FL from a layer-peeled feature extraction perspective, potentially paving the way for more effective FL algorithms. The code is available at: https://anonymous.4open.science/r/ICLR_14921_Code-3565.
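For context on the operation the abstract analyzes: model aggregation in FL typically means FedAvg-style weighted averaging of client parameters, with weights proportional to local dataset sizes. A minimal sketch is below; the function name and data layout are illustrative, not taken from the paper's code.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: weighted average of per-layer parameters.

    client_weights: one entry per client, each a list of per-layer arrays.
    client_sizes: number of local samples per client (aggregation weights).
    """
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    num_layers = len(client_weights[0])
    # Average each layer across clients, weighted by local data size.
    return [
        sum(c * w[layer] for c, w in zip(coeffs, client_weights))
        for layer in range(num_layers)
    ]
```

The paper's layer-peeled view asks what this per-layer averaging does to the features each averaged layer produces, rather than only to end-to-end accuracy.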
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 14921