Zeroth-Order Method for Distributed Optimization With Approximate Projections

IEEE Trans. Neural Networks Learn. Syst., 2016
Abstract: This paper studies the problem of minimizing a sum of (possibly nonsmooth) convex functions corresponding to multiple interacting nodes, subject to a convex state constraint set. A time-varying directed network is considered. Two types of computational constraints are investigated: one in which gradient information is not available, and one in which the projection steps can be computed only approximately. We devise a distributed zeroth-order method whose implementation requires only function evaluations and approximate projections. In particular, we show that the proposed method generates expected function value sequences that converge to the optimal value, provided that the projection errors decrease at appropriate rates.
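The abstract names the two computational ingredients the method relies on: a gradient-free (zeroth-order) update built from function evaluations and an inexact projection onto the constraint set. The sketch below is a minimal single-node illustration of those two ingredients only, assuming a standard two-point random-direction gradient estimator, a Euclidean-ball constraint, and a projection that is artificially perturbed by an error decaying with the iteration count; all function names and parameters are hypothetical, and this is not the paper's distributed algorithm, which additionally handles consensus over a time-varying directed network.

```python
import numpy as np

def two_point_gradient_estimate(f, x, mu=1e-4, rng=None):
    """Estimate a (sub)gradient of f at x from two function evaluations
    along a random unit direction -- a standard zeroth-order oracle."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return x.size * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def approximate_projection(x, radius=1.0, error=0.0, rng=None):
    """Project x onto a Euclidean ball of the given radius, then perturb
    the result to mimic an inexact projection with bounded error."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(x)
    p = x if norm <= radius else radius * x / norm
    if error > 0:
        noise = rng.standard_normal(x.shape)
        p = p + error * noise / np.linalg.norm(noise)
    return p

# Single-node illustration: minimize the nonsmooth objective ||x - b||_1
# over the unit Euclidean ball, using only function evaluations.
rng = np.random.default_rng(0)
b = np.array([2.0, -1.5, 0.5])
f = lambda x: np.linalg.norm(x - b, 1)
x = np.zeros(3)
for k in range(1, 2001):
    g = two_point_gradient_estimate(f, x, rng=rng)
    x = approximate_projection(
        x - g / (10.0 * np.sqrt(k)),   # diminishing step size
        error=1.0 / k**1.5,            # projection error decays with k
        rng=rng,
    )
print("approximate minimizer:", x, "objective value:", f(x))
```

The decaying `error` argument mirrors the abstract's condition that projection errors must shrink at an appropriate rate for the expected function values to converge; the specific decay rate and step size used here are illustrative choices, not the ones analyzed in the paper.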