Abstract: Online distributed optimization is particularly useful for solving optimization problems with streaming data collected by multiple agents over a network. When the solutions lie on a Riemannian manifold, such problems become challenging to solve, particularly when efficiency and continuous adaptation are required. This work tackles these challenges and devises a diffusion adaptation strategy for decentralized optimization over general manifolds. A theoretical analysis shows that the proposed algorithm is able to approach network agreement after sufficient iterations, which allows a non-asymptotic convergence result to be derived. We apply the algorithm to the online decentralized principal component analysis problem and Gaussian mixture model inference. Experimental results with both synthetic and real data illustrate its performance.
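The diffusion adaptation strategy described in the abstract can be illustrated with a minimal sketch: an adapt-then-combine scheme on the unit sphere, applied to streaming decentralized PCA. Every detail here (the step size, the ring topology, the projection-based retraction, the combination weights) is an illustrative assumption, not the paper's exact algorithm.

```python
import numpy as np

# Hedged sketch of adapt-then-combine (ATC) diffusion on the unit sphere:
# each agent tracks the top eigenvector of a common covariance from its
# own data stream. All constants and design choices are assumptions.

rng = np.random.default_rng(0)
d, n_agents, mu = 5, 4, 0.02

# Ring-topology combination matrix (doubly stochastic, illustrative).
A = np.zeros((n_agents, n_agents))
for k in range(n_agents):
    A[k, k] = 0.5
    A[k, (k - 1) % n_agents] = 0.25
    A[k, (k + 1) % n_agents] = 0.25

# Common covariance with one dominant direction u.
u = np.eye(d)[0]
L = np.linalg.cholesky(np.eye(d) + 9.0 * np.outer(u, u))

def retract(w, v):
    """Metric-projection retraction onto the unit sphere."""
    x = w + v
    return x / np.linalg.norm(x)

# Random initial estimates on the sphere, in a common half-space so the
# combination step cannot average anti-aligned iterates to near zero.
W = rng.standard_normal((n_agents, d))
W *= np.sign(W[:, :1])
W /= np.linalg.norm(W, axis=1, keepdims=True)

for _ in range(3000):
    # Adapt: Riemannian stochastic gradient step with a fresh sample.
    psi = np.empty_like(W)
    for k in range(n_agents):
        x = L @ rng.standard_normal(d)           # streaming sample
        egrad = -(x @ W[k]) * x                  # Euclidean grad of -0.5 (x^T w)^2
        rgrad = egrad - (egrad @ W[k]) * W[k]    # project onto tangent space
        psi[k] = retract(W[k], -mu * rgrad)
    # Combine: weighted neighbor average, mapped back to the manifold.
    W = A @ psi
    W /= np.linalg.norm(W, axis=1, keepdims=True)

# Each agent's estimate should align (up to sign) with u, and the agents
# should agree with one another after sufficient iterations.
print(np.abs(W @ u))
```

The combine step here averages in the ambient space and renormalizes; intrinsic variants would instead average in a tangent space via logarithm/exponential maps, which is closer in spirit to a general-manifold method.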
Lay Summary: How can a network of devices, like sensors, robots, or phones, work together to solve complex problems using data that keeps arriving over time? This question becomes even harder when the solutions aren’t simple numbers or vectors, but lie on curved spaces called manifolds, which are common in real-world applications like machine learning, signal processing, and control.
In this paper, we introduce a new method that helps multiple devices cooperatively learn from data as it streams in, without relying on a central server. Our method lets each device continuously update its own estimate while communicating with its neighbors to stay in agreement. Over time, all the devices move toward a shared solution.
We back up our method with mathematical guarantees, showing that it converges reliably. We also test it on two common learning tasks: finding dominant patterns in data and grouping similar pieces of information. In both cases, it works well, even with real-world data, showing that this kind of teamwork among devices is both possible and powerful.
Link To Code: https://github.com/xiuheng-wang/diffusion_manifold_release
Primary Area: Optimization->Large Scale, Parallel and Distributed
Keywords: Riemannian optimization, decentralized optimization, intrinsic method, diffusion adaptation, stochastic gradient descent, network agreement, non-asymptotic convergence, principal component analysis, Gaussian mixture model
Submission Number: 10178