FedDTW: Federated Digital Twin Weighting for Mitigating Client Heterogeneity and Unreliable Connectivity

16 Sept 2025 (modified: 30 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Digital Twin, Weight Imputation, Client Heterogeneity, Federated Learning
Abstract: Federated learning in cross-device settings suffers when selected clients fail to participate, producing biased global updates and slower convergence under partial participation. We introduce Federated Digital Twin Weighting (FedDTW)—a lightweight, server-side mechanism that maintains a digital twin of each client’s model to impute missing updates. When a client is unavailable in a round, the server forecasts that client’s current parameters from its historical weight trajectory and uses the forecast in aggregation. We evaluate FedDTW under four realistic participation patterns—Random Client Dropout, Variable Participation Rates, Network Partitions, and Delayed Updates—across four time-series datasets (Beijing Air Quality, LTE, Solar Power, METR-LA) and common forecasting backbones (CNN, RNN/GRU/LSTM, DALSTM-AE). FedDTW consistently tracks the full-participation reference (FPR) more closely than FedAvg and yields up to ≈ 6.11–50.65% lower RMSE in representative settings. These results indicate that simple, low-parameter weight-forecasting can make FL more resilient to unreliable connectivity without changing client-side training.
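The abstract's mechanism—server-side imputation of missing client updates from each client's stored weight trajectory—can be sketched in a few lines. The forecaster below (linear extrapolation from the last two weight snapshots) and all function names are illustrative assumptions, not the paper's actual implementation; the abstract only specifies that a "simple, low-parameter weight-forecasting" model is used.

```python
import numpy as np

def forecast_twin(history):
    """Forecast a client's current weights from its stored trajectory.
    Linear extrapolation from the last two snapshots is an assumed
    stand-in for the paper's low-parameter forecaster."""
    if len(history) >= 2:
        return history[-1] + (history[-1] - history[-2])
    return history[-1]

def aggregate(updates, twins, selected):
    """FedAvg-style mean over the selected cohort, where clients that
    failed to respond are imputed from their digital-twin forecast
    (the FedDTW idea, sketched)."""
    weights = []
    for cid in selected:
        if cid in updates:                       # client responded this round
            twins.setdefault(cid, []).append(updates[cid])
            weights.append(updates[cid])
        elif cid in twins:                       # unavailable: use the twin
            weights.append(forecast_twin(twins[cid]))
    return np.mean(weights, axis=0)
```

For example, if client 0 drops out after uploading weights [1, 1] and then [2, 2] in earlier rounds, its twin contributes the extrapolated [3, 3] to the current aggregate, keeping the global update closer to full participation than simply averaging over responders.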
Primary Area: learning on time series and dynamical systems
Submission Number: 7980