How Few Davids Improve One Goliath: Federated Learning in Resource-Skewed Edge Computing Environments

Published: 01 Jan 2024 · Last Modified: 06 Feb 2025 · WWW 2024 · CC BY-SA 4.0
Abstract: Real-world deployment of federated learning requires orchestrating clients with widely varied compute resources, from strong enterprise-grade devices in data centers to weak mobile and Web-of-Things devices. Prior works have attempted to downscale large models for weak devices and to aggregate the shared parts among heterogeneous models. A typical architectural assumption is that strong and weak devices are equally numerous. In reality, however, we often encounter resource skew, where a few (one or two) strong devices holding substantial data coexist with many weak devices. This poses a challenge: the unshared portion of the large model rarely receives updates or gains benefits from weak collaborators.

We aim to facilitate reciprocal benefits between strong and weak devices in resource-skewed environments. We propose RecipFL, a novel framework featuring a server-side graph hypernetwork. This hypernetwork is trained to produce parameters for personalized client models adapted to each device's capacity and unique data distribution. By encoding computational graphs, it generalizes knowledge about parameters across different model architectures. Notably, RecipFL is agnostic to model scaling strategies and supports collaboration among arbitrary neural networks. We establish a generalization bound for RecipFL through theoretical analysis and conduct extensive experiments with various model architectures. Results show that RecipFL improves accuracy by 4.5% for strong devices and 7.4% for weak devices, incentivizing both groups to actively engage in federated learning.
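The core mechanism, a server-side hypernetwork conditioned on a client model's computational graph and a per-client embedding, can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses operator-type node features, dense-adjacency message passing, and a fixed-size parameter chunk per graph node. All names (`GraphHyperNetwork`, `client_emb`, `max_param_chunk`) are hypothetical.

```python
# Minimal sketch of a graph hypernetwork that emits per-layer parameters
# for heterogeneous client models. Hypothetical names and shapes; the real
# RecipFL architecture is described in the paper.
import torch
import torch.nn as nn

class GraphHyperNetwork(nn.Module):
    """Encodes a client model's computational graph with simple message
    passing, then decodes one flat parameter chunk per node (layer)."""

    def __init__(self, num_clients: int, node_feat_dim: int,
                 hidden_dim: int, max_param_chunk: int):
        super().__init__()
        # One learned embedding per client captures its data distribution.
        self.client_emb = nn.Embedding(num_clients, hidden_dim)
        self.node_proj = nn.Linear(node_feat_dim, hidden_dim)
        # Two rounds of message passing over the computational graph.
        self.gnn_layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(2)]
        )
        # Decoder maps (node embedding, client embedding) to a parameter chunk.
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, max_param_chunk),
        )

    def forward(self, node_feats, adj, client_id):
        # node_feats: (num_nodes, node_feat_dim) operator-type features
        # adj: (num_nodes, num_nodes) adjacency of the computational graph
        h = torch.relu(self.node_proj(node_feats))
        for layer in self.gnn_layers:
            h = torch.relu(layer(adj @ h))     # aggregate neighbor messages
        c = self.client_emb(client_id).expand(h.size(0), -1)
        # One parameter chunk per graph node, conditioned on the client.
        return self.decoder(torch.cat([h, c], dim=-1))

# Example: a 5-node computational graph for one weak client (client 3).
feats = torch.randn(5, 8)
adj = torch.eye(5)                             # placeholder adjacency
params = GraphHyperNetwork(10, 8, 32, 128)(feats, adj, torch.tensor(3))
print(params.shape)                            # torch.Size([5, 128])
```

In a full system, the server would slice each emitted chunk to the true shape of the corresponding layer, assemble and ship the personalized model to the client, and update the hypernetwork from the gradients the client returns, so that models of different sizes can still inform one another through the shared generator.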