Decentralized Feature-Distributed Optimization for Generalized Linear Models

TMLR Paper 410 Authors

05 Sept 2022 (modified: 28 Feb 2023). Rejected by TMLR.
Abstract: We consider the ``all-for-one'' decentralized learning problem for generalized linear models. The features of each sample are partitioned among several collaborating agents in a connected network, but only one agent observes the response variables. To solve the regularized empirical risk minimization problem in this distributed setting, we apply the Chambolle--Pock primal--dual algorithm to an equivalent saddle-point formulation of the problem. The primal and dual iterations are either available in closed form or reduce to coordinate-wise minimization of scalar convex functions. We establish convergence rates for the empirical risk minimization under two different assumptions on the loss function (Lipschitz and square-root Lipschitz), and show how they depend on the characteristics of the design matrix and the Laplacian of the network.
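For reference, the generic Chambolle--Pock iterations for a saddle-point problem of the form $\min_x \max_y \, \langle Kx, y\rangle + g(x) - f^*(y)$ are sketched below. The step sizes $\tau, \sigma$ and extrapolation parameter $\theta$ are the standard parameters of the method, not notation taken from this submission, and the paper's specific instantiation (which operators play the roles of $K$, $g$, and $f^*$ in the distributed saddle-point formulation) may differ.

% Generic Chambolle--Pock primal--dual iteration for
%   min_x max_y  <Kx, y> + g(x) - f*(y),
% with step sizes satisfying tau * sigma * ||K||^2 <= 1
% and extrapolation parameter theta in [0, 1] (theta = 1 is standard).
\begin{align*}
  y^{k+1} &= \operatorname{prox}_{\sigma f^*}\!\big(y^k + \sigma K \bar{x}^k\big), \\
  x^{k+1} &= \operatorname{prox}_{\tau g}\!\big(x^k - \tau K^\top y^{k+1}\big), \\
  \bar{x}^{k+1} &= x^{k+1} + \theta\,\big(x^{k+1} - x^k\big).
\end{align*}

In the setting described by the abstract, the attraction of this scheme is that each proximal step either has a closed form or decomposes into coordinate-wise minimizations of scalar convex functions, which is what makes the per-agent updates cheap.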
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Peter_Richtarik1
Submission Number: 410