Abstract: Cross-device federated learning (FL) enables collaborative model training across heterogeneous edge devices while preserving data privacy. However, system heterogeneity remains a major challenge, especially under constrained computation and memory resources. Although model compression, particularly knowledge distillation, has been widely used to reduce overhead, it inevitably introduces model heterogeneity and thus degrades performance. To address this overlooked issue, we propose AHFL, a resource-adaptive and data-heterogeneity-aware FL framework. AHFL employs three coordinated strategies: a data-driven client grouping mechanism that assesses and exploits heterogeneity levels, adaptive model compression (AMC) tailored to each group's resource profile, and a novel group distribution representation (GDR) module with theoretical convergence guarantees that mitigates the performance degradation caused by model heterogeneity. Extensive experiments demonstrate that AHFL reduces computational cost by $1.7\times$ while simultaneously improving global accuracy by 4.13%. Under the same compressed architecture, AHFL narrows the global–local accuracy gap to under 2%, achieving accuracy improvements of +7.47% (local) and +3.29% (global). The code is available at: https://github.com/CST-FederatedLearning/AHFL
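As a rough illustration of how the first two strategies could fit together, the sketch below groups clients by the similarity of their label distributions and assigns each group a compression ratio derived from its resource budget. All function names, the k-means grouping heuristic, and the budget figures are illustrative assumptions rather than the paper's implementation, and the GDR module and convergence analysis are not modeled here.

```python
# Minimal, illustrative sketch of data-driven client grouping plus per-group
# adaptive compression, as described in the abstract. Names and heuristics are
# hypothetical; AHFL's actual AMC and GDR components are not reproduced.
import numpy as np

def group_clients(label_dists, n_groups=3, seed=0):
    """Group clients by similarity of their label distributions (simple k-means)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(label_dists, dtype=float)          # shape: (num_clients, num_classes)
    centers = X[rng.choice(len(X), n_groups, replace=False)]
    for _ in range(20):                               # fixed number of Lloyd iterations
        assign = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for g in range(n_groups):
            if (assign == g).any():
                centers[g] = X[assign == g].mean(0)
    return assign

def compression_ratio(resource_budget, full_cost):
    """Pick a per-group compression ratio so the compressed model fits the budget."""
    return min(1.0, resource_budget / full_cost)

# Example: 6 clients, 4 classes, heterogeneous (Dirichlet-sampled) label distributions.
dists = np.random.dirichlet(alpha=[0.3] * 4, size=6)
groups = group_clients(dists, n_groups=2)
budgets = [0.4, 1.0]                                  # hypothetical per-group compute budgets
ratios = [compression_ratio(b, full_cost=1.0) for b in budgets]
print("group assignment:", groups, "compression ratios:", ratios)
```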