Distributed aggregative optimization over directed networks with column-stochasticity

Published: 01 Jan 2025, Last Modified: 16 May 2025. J. Frankl. Inst., 2025. License: CC BY-SA 4.0.
Abstract: This paper introduces a distributed algorithm for distributed aggregative optimization (DAO) problems on directed networks with column-stochastic weight matrices, referred to as the DACS algorithm. DAO problems, in which each agent's local cost function depends on an aggregate of all agents' decisions as well as on its own decision, pose significant challenges due to potential imbalances in the underlying interaction network. The DACS algorithm leverages an advanced push-sum protocol to enable efficient information aggregation and consensus formation. Convergence is guaranteed under Lipschitz continuity of the gradients and strong convexity of the cost functions, and the incorporation of the heavy-ball method significantly accelerates the convergence of DACS. Numerical simulations across various scenarios, including multi-robot surveillance, optimal placement, and Nash–Cournot games in power systems, demonstrate the algorithm's convergence and efficiency. Furthermore, tests under network disruptions show that the algorithm maintains convergence on both fixed and time-varying networks, indicating that, as long as the connectivity assumptions hold, it remains robust across a wide range of real-world network environments.
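
The abstract's pseudocode is not reproduced here. As a rough illustration of the ingredients it names (push-sum over a column-stochastic directed network, tracking of the aggregate, and heavy-ball acceleration), the sketch below runs a generic push-sum aggregative gradient method on a hypothetical quadratic DAO instance. The cost functions, the directed-ring topology, and the step-size and momentum values are illustrative assumptions, not the authors' DACS algorithm or their experimental setup.

```python
import numpy as np

# Hypothetical toy DAO instance (not from the paper): agent i holds
#   f_i(x_i, sigma) = (x_i - a_i)^2 + (sigma - b_i)^2,
# with aggregate sigma(x) = (1/n) * sum_j phi_j(x_j) and phi_j(x_j) = x_j,
# so the gradient of the global cost wrt x_i is grad1_i + average_j(grad2_j).
rng = np.random.default_rng(0)
n = 6
a, b = rng.normal(size=n), rng.normal(size=n)

# Column-stochastic weights for a directed ring: agent j splits its mass
# equally between itself and its out-neighbour (j + 1) mod n.
C = np.zeros((n, n))
for j in range(n):
    C[j, j] = 0.5
    C[(j + 1) % n, j] = 0.5
assert np.allclose(C.sum(axis=0), 1.0)          # columns sum to one

grad1 = lambda i, xi, sig: 2.0 * (xi - a[i])    # partial derivative wrt x_i
grad2 = lambda i, xi, sig: 2.0 * (sig - b[i])   # partial derivative wrt sigma

alpha, beta, T = 0.05, 0.3, 4000                # step size, momentum, iterations
x = np.zeros(n); x_prev = x.copy()
w = np.ones(n)                                  # push-sum weights
s = x.copy()                                    # tracks the aggregate sigma(x)
u = np.array([grad2(i, x[i], s[i] / w[i]) for i in range(n)])
y = u.copy()                                    # tracks the average of grad2_j

for t in range(T):
    sig_hat = s / w                             # de-biased aggregate estimate
    g = np.array([grad1(i, x[i], sig_hat[i]) + y[i] / w[i] for i in range(n)])
    x_new = x - alpha * g + beta * (x - x_prev)  # heavy-ball step

    # Push-sum mixing with the column-stochastic matrix C, plus
    # dynamic-average corrections for the two trackers.
    w = C @ w
    s = C @ s + (x_new - x)                     # phi_i is the identity here
    u_new = np.array([grad2(i, x_new[i], s[i] / w[i]) for i in range(n)])
    y = C @ y + (u_new - u)
    x_prev, x, u = x, x_new, u_new

# Closed-form optimum of the toy problem, for comparison.
x_star = a + (b.mean() - a.mean()) / 2.0
print("distance to optimum:", np.linalg.norm(x - x_star))
```

The ratios s/w and y/w are the push-sum de-biasing step: because C is only column-stochastic rather than doubly stochastic, the raw trackers converge to a skewed combination of the agents' quantities, and dividing by the push-sum weights w recovers unbiased estimates of the aggregate and of the average aggregate-gradient.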