Abstract: In this study, we investigate distributed optimization through the lens of linearly constrained optimization problems and analyze the loopless projection stochastic approximation (LPSA) method. LPSA incorporates a probabilistic projection, with probability $p_n$ at the $n$-th iteration, to ensure random feasibility. We set $p_n \propto \eta_n^\beta$, where $\eta_n$ denotes the step size and $\beta$ is a tuning parameter. Our previous research demonstrates, via diffusion approximation in the asymptotic regime, that LPSA exhibits a fascinating bias-variance trade-off: for $\beta < 0.5$ the bias degenerates, while for $\beta > 0.5$ the bias dominates the variance. In this work, we investigate the intricate scenario $\beta = 0.5$ and discover that the last iterate, after appropriate scaling, converges weakly to a biased Gaussian distribution. As a result, we provide a comprehensive asymptotic analysis of LPSA and a complete characterization of its phase transitions. Observing that a non-zero bias leads to slow convergence, we propose a debiased version of LPSA, called Debiased LPSA (DLPSA), which effectively reduces projection complexity compared to vanilla LPSA.
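To fix ideas, the sketch below illustrates the LPSA update described above: a stochastic gradient step followed by a projection fired with probability $p_n \propto \eta_n^\beta$. It is a minimal illustration only; the step-size schedule $\eta_n = \eta_0 (n+1)^{-\alpha}$, the constant $c$ in $p_n = \min(1, c\,\eta_n^\beta)$, and the `grad`/`project` oracle interfaces are assumed choices, not the authors' implementation.

```python
import numpy as np

def lpsa(grad, project, x0, n_iters, eta0=0.1, alpha=0.75, beta=0.5, c=1.0, seed=0):
    """Illustrative LPSA sketch (hypothetical interface, not the paper's code)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for n in range(n_iters):
        eta = eta0 / (n + 1) ** alpha   # diminishing step size eta_n
        p = min(1.0, c * eta ** beta)   # projection probability p_n ~ eta_n^beta
        x = x - eta * grad(x, rng)      # stochastic gradient step
        if rng.random() < p:            # loopless projection: fire with prob p_n
            x = project(x)              # restore feasibility for the linear constraint
    return project(x)                   # project once more so the output is feasible

# Assumed toy setup: minimize E||x - (target + noise)||^2 subject to Ax = b,
# with projection onto the affine set {x : Ax = b}.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
pinv = A.T @ np.linalg.inv(A @ A.T)                  # used by the affine projection
project = lambda x: x - pinv @ (A @ x - b)
grad = lambda x, rng: 2 * (x - np.array([2.0, 0.0])) + 0.1 * rng.standard_normal(2)
x_last = lpsa(grad, project, np.zeros(2), n_iters=5000)
```

The "loopless" design replaces an inner projection loop with a single coin flip per iteration, so expensive projections occur only on a vanishing fraction of steps of order $\eta_n^\beta$.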