Keywords: Federated Learning, Decentralized Optimization, Communication Compression, Quantization
TL;DR: We propose Log-Bit Distributed Learning with Harmonic Modulation, which compresses high-dimensional updates into log-bit transmissions, enabling provable convergence and drastically reduced communication.
Abstract: We consider distributed learning over a communication graph in which decentralized clients, as local data owners, exchange information only with their neighbors to train a system-level model, making communication complexity a critical factor. To reduce this cost, we introduce a communication quantization scheme based on Harmonic Modulation, in which high-dimensional vectors are compressed and quantized prior to transmission, substantially reducing communication overhead. Building on this idea, we propose Log-Bit Gradient Descent with Harmonic Modulation, in which each sender compresses a $d$-dimensional vector into a single scalar, quantizes it into an $m$-bit binary code, and transmits the code to the receivers for decoding. Under a sufficient condition, our method achieves an $\mathcal{O}(1/t)$ convergence rate, where $t$ denotes the number of iterations. Moreover, we establish a conservative lower bound showing that only $\log_2(\mathcal{O}(d))$ bits per communication are required, where $d$ is the vector dimension. Experimental results on synthetic quadratic optimization and logistic regression validate the effectiveness of our approach. In particular, for logistic regression, our method reaches the same target accuracy while using nearly 800× fewer bits per iteration and almost two orders of magnitude less total communication than the baseline methods.
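To make the sender–receiver pipeline described above concrete, the following is a minimal illustrative sketch of a harmonic-modulation-style codec: the sender projects a $d$-dimensional update onto a sinusoidal carrier to obtain a single scalar, uniformly quantizes it to an $m$-bit code, and the receiver dequantizes the code and maps it back to a $d$-dimensional estimate. The carrier construction, the uniform quantizer, and all function names here are assumptions for illustration only, not the paper's exact scheme.

```python
# Hypothetical sketch of log-bit communication via a harmonic carrier.
# Assumptions: a single cosine carrier, a uniform m-bit quantizer with a
# fixed clipping range `scale`; these are illustrative, not the paper's method.
import numpy as np

def harmonic_carrier(d, freq=1.0):
    """Unit-norm sinusoidal carrier used to modulate/demodulate the update."""
    c = np.cos(2 * np.pi * freq * np.arange(d) / d)
    return c / np.linalg.norm(c)

def encode(x, carrier, m, scale):
    """Compress x to the scalar <carrier, x>, then quantize it to an m-bit code."""
    s = float(carrier @ x)                       # d-dimensional vector -> one scalar
    levels = 2 ** m
    s_clipped = np.clip(s, -scale, scale)        # keep the scalar in the quantizer range
    code = int(round((s_clipped + scale) / (2 * scale) * (levels - 1)))
    return code                                  # integer representable with m bits

def decode(code, carrier, m, scale):
    """Dequantize the m-bit code and map it back to a d-dimensional estimate."""
    levels = 2 ** m
    s_hat = code / (levels - 1) * 2 * scale - scale
    return s_hat * carrier                       # rank-one reconstruction of the update

# Toy usage: one round of compressed communication.
rng = np.random.default_rng(0)
d, m, scale = 1000, 10, 5.0                      # m ~ log2(d) bits per message
x = rng.standard_normal(d)
c = harmonic_carrier(d)
code = encode(x, c, m, scale)
x_hat = decode(code, c, m, scale)
print(code, np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

In this sketch only the $m$-bit integer `code` crosses the network, which is how a per-message budget on the order of $\log_2(\mathcal{O}(d))$ bits could be realized; the reconstruction error introduced by the rank-one decoding is exactly what a convergence analysis of the kind summarized above would need to control.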
Primary Area: optimization
Submission Number: 17857