Keywords: quantization, compression, distributed optimization, federated learning
Abstract: Quantization [Alistarh et al., 2017] is an important (stochastic) compression technique that reduces the volume of transmitted bits during each communication round in distributed model training. Suresh et al. [2022] introduce correlated quantizers and show their advantages over independent counterparts by analyzing distributed SGD communication complexity. We analyze the forefront distributed non-convex optimization algorithm MARINA [Gorbunov et al., 2022] utilizing the proposed correlated quantizers and show that it outperforms the original MARINA and the distributed SGD of Suresh et al. [2022] in terms of communication complexity. We significantly refine the original analysis of MARINA without any additional assumptions using the weighted Hessian variance [Tyurin et al., 2022], and then we expand the theoretical framework of MARINA to accommodate a substantially broader range of potentially correlated and biased compressors, thus broadening the applicability of the method beyond the conventional independent unbiased compressor setup. Extensive experimental results corroborate our theoretical findings.
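To make the quantization setting concrete, below is a minimal sketch of an unbiased stochastic quantizer in the spirit of QSGD [Alistarh et al., 2017]. The function name, the number of levels `s`, and the NumPy-based interface are illustrative assumptions, not taken from the submission; correlated quantizers as in Suresh et al. [2022] would additionally couple the randomness across workers.

```python
import numpy as np

def stochastic_quantize(v, s=4, rng=None):
    """Sketch of an unbiased stochastic quantizer (QSGD-style).

    Each coordinate is randomly rounded to one of s+1 levels of the
    vector norm so that E[Q(v)] = v (hypothetical illustration)."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    ratio = np.abs(v) / norm * s           # position within [0, s]
    lower = np.floor(ratio)                # lower quantization level
    prob = ratio - lower                   # probability of rounding up
    level = lower + (rng.random(v.shape) < prob)
    return np.sign(v) * level / s * norm   # reconstructed quantized vector
```

Since the rounding probability equals the fractional part of `ratio`, the expected level is exactly `ratio`, which gives the unbiasedness property relied on by compressed distributed methods such as MARINA.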
Latex Source Code: zip
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission143/Authors, auai.org/UAI/2025/Conference/Submission143/Reproducibility_Reviewers
Submission Number: 143