Abstract: In federated learning, multiple clients collaboratively train a global machine learning model by exchanging their locally trained model weights instead of raw data. In the standard setting, every client trains its local model for the same number of epochs. We introduce ALT (Adaptive Local Training), a simple yet effective feedback mechanism that can be added on top of any federated learning scheme at the client side to limit unnecessary and performance-degrading computation. ALT dynamically adjusts the number of training epochs for each client based on the similarity between the local representation and the global one, so that well-aligned clients can train longer without suffering client drift, while training is stopped earlier when the drift becomes too large. We evaluated ALT on federated partitions of the CIFAR-10 and Tiny-ImageNet datasets, demonstrating its effectiveness in improving both model convergence speed and accuracy. The code is available at https://github.com/LTTM/ALT.
External IDs: dblp:conf/eusipco/ShenajBZ25
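The abstract describes mapping the similarity between a client's local representation and the global one to a per-client epoch budget. Below is a minimal, hypothetical sketch of such an adaptive epoch rule in PyTorch; the function name, the use of cosine similarity, and the linear mapping to an epoch count are assumptions for illustration, not the authors' actual implementation (see the linked repository for that).

```python
# Hypothetical sketch of an ALT-style adaptive epoch rule (illustrative only).
import torch
import torch.nn.functional as F


def adaptive_num_epochs(local_repr: torch.Tensor,
                        global_repr: torch.Tensor,
                        min_epochs: int = 1,
                        max_epochs: int = 10) -> int:
    """Map the alignment between local and global representations to a
    local epoch budget: well-aligned clients train longer, drifted
    clients stop earlier."""
    # Cosine similarity between the flattened local and global representations.
    sim = F.cosine_similarity(local_repr.flatten(), global_repr.flatten(), dim=0)
    # Rescale from [-1, 1] to [0, 1] and interpolate the epoch budget linearly.
    alpha = (sim.item() + 1.0) / 2.0
    return max(min_epochs, round(min_epochs + alpha * (max_epochs - min_epochs)))
```

In a client update loop, such a rule would be evaluated after receiving the global model and before local training, so that the number of local epochs reflects how far the client has already drifted from the global representation.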