Accelerated Methods with Compression for Horizontal and Vertical Federated Learning

Published: 20 Sept 2024 · Last Modified: 01 Oct 2024 · ICOMP Publication · CC BY 4.0
Keywords: convex optimization, compression, distributed optimization, acceleration, horizontal and vertical data partitioning
Abstract: Distributed optimisation algorithms have emerged as a superior approach to solving applied problems, including the training of machine learning models. To accommodate the diverse ways in which data can be stored across devices, these methods must be adaptable to a wide range of situations. At the uppermost level, two orthogonal regimes of distributed algorithms are distinguished: horizontal and vertical. Nevertheless, irrespective of how the data is distributed among workers, communication between them can become a critical bottleneck during parallel training, particularly for high-dimensional and over-parameterised models. It is therefore crucial to enhance current methods with strategies that minimise the amount of data transmitted during training while still producing a model of similar quality. This paper introduces two accelerated algorithms with various compressors, operating in the horizontal and vertical data-partitioning regimes respectively. By adapting the variance reduction technique of non-distributed methods to the stochasticity introduced by compression, we achieve one of the best known asymptotics for the horizontal case. Additionally, we provide one of the first theoretical convergence guarantees for the vertical regime. In our experiments, we demonstrate superior practical performance compared to other popular approaches.
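To give a sense of what "variance reduction for compression noise" typically looks like, below is a minimal, hypothetical sketch of a DIANA-style compressed gradient step with an unbiased rand-k compressor. It is an illustration of the general technique only, not the paper's actual algorithms; all function names and parameters (rand_k, diana_step, alpha, k) are assumptions for the example.

```python
# Illustrative sketch (NOT the paper's method): DIANA-style variance-reduced
# compressed gradient descent. Each worker compresses the difference between its
# gradient and a locally maintained "shift" vector; as the shifts converge to the
# true gradients, the variance injected by compression vanishes.

import numpy as np


def rand_k(v, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates, rescale by d/k."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out


def diana_step(x, grads, shifts, lr, alpha, k, rng):
    """One synchronous round: workers send only compressed gradient differences."""
    # Each worker compresses (gradient - shift) instead of the raw gradient.
    deltas = [rand_k(g - h, k, rng) for g, h in zip(grads, shifts)]
    # Workers (and the server, in mirrored copies) update their shifts.
    new_shifts = [h + alpha * d for h, d in zip(shifts, deltas)]
    # Server reconstructs an estimate of the average gradient from shifts + deltas.
    g_hat = np.mean([h + d for h, d in zip(shifts, deltas)], axis=0)
    return x - lr * g_hat, new_shifts


if __name__ == "__main__":
    # Toy usage: distributed least squares over 4 workers (horizontal split).
    rng = np.random.default_rng(0)
    d, n_workers = 20, 4
    A = [rng.standard_normal((30, d)) for _ in range(n_workers)]
    b = [Ai @ np.ones(d) for Ai in A]
    x = np.zeros(d)
    shifts = [np.zeros(d) for _ in range(n_workers)]
    for _ in range(500):
        grads = [Ai.T @ (Ai @ x - bi) / len(bi) for Ai, bi in zip(A, b)]
        x, shifts = diana_step(x, grads, shifts, lr=0.05, alpha=0.25, k=5, rng=rng)
    print("distance to solution:", np.linalg.norm(x - np.ones(d)))
```

The design point this sketch conveys is that compressing gradient differences, rather than raw gradients, is what lets such methods retain fast (and, with momentum, accelerated) convergence despite transmitting only a fraction of the coordinates per round.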
Submission Number: 29