Poster: Raising the Temporal Misalignment in Federated Learning

Published: 01 Jan 2023, Last Modified: 16 May 2025 · ICDCS 2023 · CC BY-SA 4.0
Abstract: Public knowledge now evolves rapidly, rendering previously collected data susceptible to obsolescence. Continuously generated new knowledge can further degrade the performance of a model trained on earlier data, a phenomenon known as temporal misalignment. A vanilla mitigation is to periodically retrain the model in a centralized learning scheme. In a decentralized framework such as Federated Learning (FL), however, this patch would require clients to upload their data, which contradicts FL's intention to protect client privacy. Moreover, under FL's stationary defenses, new knowledge can be misjudged and rejected as a malicious attack, which hinders further updates of the model; yet dynamically adapting the defenses requires meticulous fine-tuning and harms scalability. In this poster, we raise this practical concern and discuss it in the context of FL. We then build a prototype of a GPT-2-based FL framework and conduct experiments to demonstrate our perspective: performance on new knowledge drops by 33.47% compared with the previous data, which supports our claim that FL with a defense strategy can misjudge new knowledge.
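To make the misjudgment mechanism concrete, below is a minimal sketch, not the authors' GPT-2 prototype, of how a stationary similarity-based defense in federated averaging can reject a benign update that carries new knowledge. The cosine threshold, toy model dimensionality, and synthetic "drift" vector are all illustrative assumptions.

```python
# Sketch (assumption, not the poster's implementation): a server-side defense
# accepts only client updates whose cosine similarity to a reference direction
# exceeds a threshold fixed at deployment time.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64             # toy model dimensionality (assumption)
COS_THRESHOLD = 0.5  # static acceptance threshold (assumption)

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Clients whose data matches the old distribution produce clustered updates.
consensus = rng.normal(size=DIM)
old_updates = [consensus + 0.1 * rng.normal(size=DIM) for _ in range(8)]

# A client holding newly generated knowledge produces a shifted, benign update.
drift = rng.normal(size=DIM)
new_update = consensus + 3.0 * drift

reference = np.mean(old_updates, axis=0)  # server's reference direction

accepted = []
for i, upd in enumerate(old_updates + [new_update]):
    sim = cosine(upd, reference)
    if sim >= COS_THRESHOLD:
        accepted.append(upd)
    else:
        # The stationary defense cannot distinguish distribution shift
        # from a poisoning attempt, so the new knowledge is dropped.
        print(f"client {i}: rejected (cosine similarity {sim:.2f})")

global_update = np.mean(accepted, axis=0)  # aggregate only accepted updates
print(f"aggregated {len(accepted)} of {len(old_updates) + 1} updates")
```

Running the sketch, the eight old-distribution updates pass the filter while the shifted update is rejected, mirroring the concern raised above: a defense tuned to the old data distribution silently excludes the very updates that would keep the model current.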