dSTAR: Straggler Tolerant and Byzantine Resilient Distributed SGD

Published: 15 Oct 2024 · Last Modified: 29 Dec 2024 · AdvML-Frontiers 2024 · CC BY 4.0
Keywords: Byzantine attack, SGD, Byzantine resilience, gradient aggregation rule
TL;DR: A new Byzantine-resilient gradient aggregation rule optimized for the straggler problem
Abstract: Distributed model training must contend with challenges such as the straggler effect and Byzantine attacks. When coordinating the training process across multiple computing nodes, ensuring timely and reliable gradient aggregation amidst network and system malfunctions is essential. To tackle these issues, we propose $\textit{dSTAR}$, a lightweight and efficient approach for distributed stochastic gradient descent (SGD) that enhances robustness and convergence. $\textit{dSTAR}$ selectively aggregates gradients by collecting updates from the first $k$ workers to respond, filtering them based on deviations calculated using an ensemble median. This method not only mitigates the impact of stragglers but also fortifies the model against Byzantine adversaries. We theoretically establish that $\textit{dSTAR}$ is $(\alpha, f)$-Byzantine resilient and achieves a linear convergence rate. Empirical evaluations across various scenarios demonstrate that $\textit{dSTAR}$ consistently maintains high accuracy, outperforming other Byzantine-resilient methods that often suffer accuracy drops of 40-50\% under attack. Our results highlight $\textit{dSTAR}$ as a robust solution for training models in distributed environments prone to both straggler delays and Byzantine faults.
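The abstract describes the aggregation rule only at a high level (take the first $k$ responders, filter by deviation from an ensemble median, then aggregate). The sketch below is a minimal, assumption-laden illustration of that idea, not the paper's actual algorithm: the function name `dstar_aggregate_sketch`, the use of a coordinate-wise median as the "ensemble median", the keep-the-$k-f$-closest filtering rule, and the final plain averaging are all hypothetical choices made here for illustration.

```python
import numpy as np

def dstar_aggregate_sketch(worker_gradients, k, num_byzantine):
    """Hedged sketch of a dSTAR-style aggregation step.

    worker_gradients: list of gradient vectors, ordered by arrival time.
    k: number of fastest workers whose updates are collected.
    num_byzantine: assumed upper bound f on Byzantine workers among the k.
    """
    # 1) Straggler mitigation: keep only the first k gradients to arrive.
    fastest = np.stack(worker_gradients[:k])

    # 2) Robust reference point: coordinate-wise median of the collected
    #    gradients (a stand-in for the paper's "ensemble median").
    median = np.median(fastest, axis=0)

    # 3) Byzantine filtering: score each gradient by its deviation from the
    #    median and keep the k - f closest ones (threshold rule assumed).
    deviations = np.linalg.norm(fastest - median, axis=1)
    keep = np.argsort(deviations)[: k - num_byzantine]

    # 4) Aggregate the surviving gradients by simple averaging.
    return fastest[keep].mean(axis=0)

# Toy usage: six honest-ish gradients plus one large outlier (Byzantine).
rng = np.random.default_rng(0)
grads = [rng.normal(1.0, 0.1, size=10) for _ in range(6)] + [np.full(10, 50.0)]
print(dstar_aggregate_sketch(grads, k=7, num_byzantine=1))
```

In this toy run the outlier gradient has the largest deviation from the median and is discarded before averaging, which is the intuition behind combining first-$k$ collection (straggler tolerance) with median-based filtering (Byzantine resilience).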
Submission Number: 9