Abstract: This work presents a novel approach to low-rank matrix factorization in a federated learning context, where multiple clients collaboratively solve a matrix decomposition problem without sharing their local data. The algorithm introduces a power initialization technique for the global factorization matrix and combines it with local gradient descent updates to achieve strong theoretical and practical guarantees. Under this power initialization, we recast the original smooth non-convex problem as a smooth strongly-convex problem, which we solve with a parallel Nesterov gradient descent requiring at most a single communication step at initialization. We prove a linear rate of convergence for the excess loss, improving on the convergence rates given in the literature, and we derive an upper bound on the Frobenius-norm reconstruction error under the power initialization strategy. We complete our analysis with experiments on both synthetic and real data.
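To make the two-phase scheme concrete, the following is a minimal sketch, assuming each client i holds a local data matrix S_i and that the single communication step consists of aggregating the local Gram matrices S_i^T S_i at a server before a subspace (power) iteration; the function names (power_init, local_nesterov) and this particular aggregation are illustrative assumptions for exposition, not the paper's exact protocol.

```python
import numpy as np

def power_init(local_grams, r, n_iter=50, seed=0):
    """One-round power initialization of the shared factor V (d x r).

    local_grams: list of d x d matrices S_i^T S_i, sent once by each
    client (the single communication step assumed in this sketch).
    """
    d = local_grams[0].shape[0]
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((d, r)))[0]
    G = sum(local_grams)  # server-side aggregation of the Gram matrices
    for _ in range(n_iter):
        V = np.linalg.qr(G @ V)[0]  # orthonormalized power iteration
    return V

def local_nesterov(S_i, V, n_steps=200):
    """Client-side Nesterov gradient descent on the strongly convex
    problem min_U 0.5 * ||S_i - U V^T||_F^2, with V fixed after init.
    """
    n = S_i.shape[0]
    r = V.shape[1]
    H = V.T @ V                        # r x r, positive definite if V has full column rank
    eigs = np.linalg.eigvalsh(H)
    L, mu = eigs[-1], eigs[0]          # smoothness / strong-convexity constants
    step = 1.0 / L
    momentum = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    U = np.zeros((n, r))
    U_prev = U.copy()
    for _ in range(n_steps):
        Y = U + momentum * (U - U_prev)     # Nesterov extrapolation
        grad = (Y @ V.T - S_i) @ V          # gradient of 0.5 * ||S_i - Y V^T||_F^2
        U_prev = U
        U = Y - step * grad
    return U

# Usage on synthetic data (hypothetical setup): two clients, rank-5 factors.
rng = np.random.default_rng(1)
clients = [rng.standard_normal((100, 20)) for _ in range(2)]
V = power_init([S.T @ S for S in clients], r=5)
U_hats = [local_nesterov(S, V) for S in clients]
```

With V frozen after initialization, each client's subproblem is a least-squares objective whose Hessian V^T V is shared across clients, which is what makes it smooth and strongly convex and lets the Nesterov updates run in parallel with no further communication.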