Fast Optimization With Zeroth-Order Feedback in Distributed, Multi-User MIMO Systems

IEEE Trans. Signal Process., 2020 (modified: 05 Nov 2022)
Abstract: In this paper, we develop a gradient-free optimization methodology for efficient resource allocation in Gaussian MIMO multiple access channels. Our approach combines two main ingredients: (i) an entropic semidefinite optimization method based on matrix exponential learning (MXL); and (ii) a one-shot gradient estimator that achieves low variance through the reuse of past information. This novel algorithm, which we call gradient-free MXL with callbacks (MXL0+), retains the convergence speed of gradient-based methods while requiring minimal feedback per iteration: a single scalar. In more detail, in a MIMO multiple access channel with K users and M transmit antennas per user, MXL0+ achieves ε-optimality within poly(K, M)/ε² iterations (on average and with high probability), even when implemented in a fully distributed, asynchronous manner. For cross-validation, we also perform a series of numerical experiments in medium to large-scale MIMO networks under realistic channel conditions. Throughout our experiments, the performance of MXL0+ matches, and sometimes exceeds, that of gradient-based MXL methods, all the while operating with a vastly reduced communication overhead. In view of these findings, the MXL0+ algorithm appears to be uniquely suited for distributed massive MIMO systems where gradient calculations can become prohibitively expensive.
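
To make the two ingredients concrete, below is a minimal single-machine Python sketch, not the authors' implementation: the function name mxl0_plus, the step sizes, the perturbation scaling, the real-valued (rather than complex Hermitian) matrices, and the toy sum-rate utility are all illustrative assumptions. It pairs a matrix exponential learning update, which maps accumulated score matrices Y_k to trace-constrained covariances Q_k, with a one-point gradient estimator that reuses the previous scalar observation to reduce variance.

import numpy as np

def mxl0_plus(utility, K, M, P_max=1.0, delta=0.01, step=0.05, T=500, seed=None):
    """Sketch of gradient-free MXL with a residual (past-value reuse)
    one-point estimator. All parameter names/defaults are illustrative."""
    rng = np.random.default_rng(seed)
    Y = [np.zeros((M, M)) for _ in range(K)]  # per-user score matrices
    f_prev = 0.0  # previous scalar feedback, reused to lower variance

    def exp_map(Yk):
        # MXL step: Q_k = P_max * expm(Y_k) / tr(expm(Y_k)), via the
        # eigendecomposition of the symmetric matrix Y_k.
        w, V = np.linalg.eigh(Yk)
        w = np.exp(w - w.max())          # numerically stable exponential
        Q = (V * w) @ V.T
        return P_max * Q / np.trace(Q)

    for _ in range(T):
        Qs, U = [], []
        for k in range(K):
            # Random symmetric perturbation with unit Frobenius norm.
            A = rng.standard_normal((M, M))
            Uk = (A + A.T) / 2
            Uk /= np.linalg.norm(Uk)
            U.append(Uk)
            Qs.append(exp_map(Y[k] + delta * Uk))
        f_now = utility(Qs)  # single scalar feedback from the system
        for k in range(K):
            # One-point estimate reusing the previous observation: the
            # difference f_now - f_prev keeps the estimator's variance low.
            # The M*M dimension factor is an illustrative scaling choice.
            g_hat = (M * M / delta) * (f_now - f_prev) * U[k]
            Y[k] = Y[k] + step * g_hat
        f_prev = f_now
    return [exp_map(Yk) for Yk in Y]

# Toy usage (also an assumption): sum rate of a K-user MIMO MAC with
# fixed random channels, log det(I + sum_k H_k Q_k H_k^T).
K, M, N = 3, 2, 4
H = [np.random.default_rng(k).standard_normal((N, M)) for k in range(K)]
def sum_rate(Qs):
    S = np.eye(N) + sum(Hk @ Qk @ Hk.T for Hk, Qk in zip(H, Qs))
    return np.linalg.slogdet(S)[1]
Q_opt = mxl0_plus(sum_rate, K, M, seed=0)

One design point worth noting in this sketch: perturbing the score matrices Y_k before the exponential map keeps every queried covariance automatically feasible (positive semidefinite with the prescribed trace), so no extra projection is needed; this is a simplification relative to the paper's sampling scheme.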
