TL;DR: FedCONST improves feature generalization in Federated Learning (FL) by adaptively adjusting model updates based on parameter strength using convex constraints, leading to state-of-the-art performance.
Abstract: Federated learning (FL) often struggles with generalization due to heterogeneous client data. Local models are prone to overfitting their local data distributions, and even transferable features can be distorted during aggregation. To address these challenges, we propose FedCONST, an approach that adaptively modulates update magnitudes based on the global model’s parameter strength. This prevents over-emphasizing well-learned parameters while reinforcing underdeveloped ones. Specifically, FedCONST employs linear convex constraints to ensure training stability and preserve locally learned generalization capabilities during aggregation. A Gradient Signal-to-Noise Ratio (GSNR) analysis further validates FedCONST's effectiveness in enhancing feature transferability and robustness. As a result, FedCONST effectively aligns local and global objectives, mitigating overfitting and promoting stronger generalization across diverse FL environments, achieving state-of-the-art performance.
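The page itself ships no code, so the following is only a minimal PyTorch-style sketch of the idea described in the abstract, not the authors' implementation. The strength proxy (the magnitude of the global parameters), the mixing weight `lam`, and the helper names `constrained_update` and `gsnr` are all illustrative assumptions.

```python
import torch

# Hypothetical sketch (not the authors' code): scale a client's update by the
# global model's per-parameter "strength" -- here approximated by |w_global| --
# and keep the result a convex combination of global and local weights so the
# aggregated model stays within the segment between the two.
def constrained_update(w_global: torch.Tensor,
                       w_local: torch.Tensor,
                       eps: float = 1e-8) -> torch.Tensor:
    strength = w_global.abs()                          # assumed strength measure
    scale = 1.0 / (1.0 + strength)                     # dampen strong, boost weak parameters
    lam = (scale / (scale.max() + eps)).clamp(0.0, 1.0)  # convex weights in [0, 1]
    # Convex combination: (1 - lam) * global + lam * local, element-wise.
    return (1.0 - lam) * w_global + lam * w_local


def gsnr(grads: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-parameter Gradient Signal-to-Noise Ratio, E[g]^2 / Var[g],
    estimated over a batch/client dimension (dim 0)."""
    mean = grads.mean(dim=0)
    var = grads.var(dim=0, unbiased=False)
    return mean.pow(2) / (var + eps)
```

The convex mixing keeps every aggregated parameter between its global and local values, which is one simple way to realize the stability constraint the abstract alludes to; the GSNR helper mirrors the standard definition used to gauge how transferable a gradient signal is.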
Lay Summary: Federated learning (FL) allows multiple devices to collaboratively train machine learning models without sharing their data. However, when data is distributed unevenly across devices, models often overfit to local patterns and fail to generalize well. Our method, FedCONST, addresses this challenge by adjusting how much each part of the model is updated, based on how strongly that part has already been learned. This prevents over-updating well-trained parts while encouraging weaker parts to catch up. By adding lightweight constraints during training, FedCONST improves stability and helps preserve useful features. As a result, it achieves stronger generalization and state-of-the-art performance across a variety of FL scenarios.
Link To Code: https://github.com/skku-dhkim/FedTorch.git
Primary Area: General Machine Learning
Keywords: Federated Learning, Convex Optimization
Submission Number: 10600