Byzantine-resilient distributed learning under constraints

ACC 2021 (modified: 17 Apr 2023)
Abstract: We consider a class of convex distributed statistical learning problems with inequality constraints in an adversarial scenario. At each iteration, an α-fraction of the m machines, which are supposed to compute stochastic gradients of the loss function and send them to a master machine, may act adversarially and send faulty gradients. To guard against this defective information sharing, we develop a Byzantine primal-dual algorithm. For α ∈ [0, 0.5), we prove that after T iterations the algorithm achieves Õ(1/T + 1/√(mT) + α/√T) statistical error bounds on both the optimality gap and the constraint violation. Our result holds for a class of normed vector spaces and, when specialized to the Euclidean space, attains the optimal error bound for Byzantine stochastic gradient descent.
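
The abstract states the error bound but not the algorithm's update rule. As an illustration only, here is a minimal Python sketch of one plausible instantiation: a projected primal-dual (Arrow-Hurwicz-style) update for min f(x) subject to g(x) ≤ 0, in which the master aggregates the m workers' stochastic gradients with a coordinate-wise median, a standard Byzantine-robust aggregator. All names (byzantine_primal_dual, grad_f_oracle, grad_g) and the choice of median aggregation are assumptions, not the paper's method.

```python
import numpy as np

def byzantine_primal_dual(grad_f_oracle, g, grad_g, x0, lam0,
                          m=10, alpha=0.2, T=1000, eta=0.01, rng=None):
    """Hedged sketch of a Byzantine-tolerant primal-dual method.

    Solves min f(x) s.t. g(x) <= 0 via gradient descent on the primal
    variable and projected gradient ascent on the dual variable. The
    coordinate-wise median aggregator is an illustrative choice; the
    paper's actual aggregation rule is not given in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, lam = x0.copy(), lam0.copy()
    n_bad = int(alpha * m)  # Byzantine workers per round, alpha in [0, 0.5)
    for _ in range(T):
        # Each worker reports a stochastic gradient of the Lagrangian in x.
        grads = np.stack([grad_f_oracle(x, rng) + grad_g(x).T @ lam
                          for _ in range(m)])
        # Byzantine workers may instead send arbitrary vectors.
        grads[:n_bad] = rng.normal(scale=100.0, size=grads[:n_bad].shape)
        robust_grad = np.median(grads, axis=0)   # coordinate-wise median
        x = x - eta * robust_grad                # primal descent step
        lam = np.maximum(lam + eta * g(x), 0.0)  # projected dual ascent
    return x, lam
```

A toy usage, again purely hypothetical: minimize ||x||² subject to x[0] ≥ 1, written as g(x) = 1 - x[0] ≤ 0.

```python
f_grad = lambda x, rng: 2 * x + rng.normal(scale=0.1, size=x.shape)
g = lambda x: np.array([1.0 - x[0]])
g_jac = lambda x: np.array([[-1.0, 0.0]])  # Jacobian of g, shape (1, 2)
x, lam = byzantine_primal_dual(f_grad, g, g_jac,
                               x0=np.zeros(2), lam0=np.zeros(1))
```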