Abstract: Communication-efficient approaches to decentralized learning rely on the exchange of quantized signals coupled with local updates. In this context, differential quantization is an effective technique for mitigating the negative impact of quantization by exploiting correlations between successive iterates. In addition, error-feedback, which incorporates the quantization error into subsequent steps, is a powerful mechanism for compensating for the bias caused by quantization. Separately, several methods have been proposed in recent years to correct the inherent bias present in decentralized learning implementations. While the theoretical benefits of decentralized bias-correction methods are clear, their practical implementation in real-world networks with constrained communication resources requires further investigation. In this work, we propose and study a new communication-efficient decentralized learning approach that blends bias-correction with differential quantization and error-feedback. The results show that, under general conditions on the quantization noise and for sufficiently small step-sizes μ, the steady-state estimation errors remain small (on the order of μ) while the bit rates stay finite. Simulations are provided to illustrate the theoretical findings.
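To make the two communication mechanisms described above concrete, the following is a minimal Python sketch of differential quantization combined with error-feedback on a single transmitter-receiver link. The uniform quantizer, the class name `DiffQuantEF`, and all parameter values are illustrative assumptions for exposition; they are not the specific scheme or quantization model analyzed in the paper.

```python
import numpy as np

def quantize(v, delta=0.05):
    """Uniform quantizer with resolution `delta` (an illustrative
    choice, not the quantizer assumed in the paper)."""
    return delta * np.round(v / delta)

class DiffQuantEF:
    """Transmitter state for differential quantization with
    error-feedback (hypothetical helper for illustration)."""
    def __init__(self, dim):
        self.ref = np.zeros(dim)  # last reconstruction known to the receiver
        self.err = np.zeros(dim)  # accumulated (fed-back) quantization error

    def encode(self, x):
        # Differential quantization: quantize the innovation relative to
        # the shared reference, with the past quantization error added back.
        u = x - self.ref + self.err
        q = quantize(u)
        # Error-feedback: store the residual for the next round.
        self.err = u - q
        # Both sides advance the shared reference by the transmitted symbol.
        self.ref = self.ref + q
        return q  # only q travels over the rate-limited link

# Receiver side: keep a local copy of `ref` and apply ref += q on each
# received symbol, recovering the same reconstruction as the sender.
```

As a usage example under the same assumptions, tracking a slowly drifting iterate (a stand-in for local updates with a small step-size) keeps the reconstruction gap on the order of the quantizer resolution:

```python
agent = DiffQuantEF(dim=3)
x = np.zeros(3)
for t in range(100):
    x = x + 0.01 * np.random.randn(3)  # stand-in for a local update
    agent.encode(x)
print(np.linalg.norm(x - agent.ref))  # small: bounded by the resolution
```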