Online Optimization With Predictions and Switching Costs: Fast Algorithms and the Fundamental Limit

Published: 01 Jan 2021 (Last Modified: 16 May 2023) · IEEE Trans. Autom. Control, 2021
Abstract: This article considers online optimization with a finite prediction window of cost functions and an additional switching cost on the decisions. We study the fundamental limits on the dynamic regret of any online algorithm, in both the with-prediction and the no-prediction settings. In addition, we propose two gradient-based online algorithms, receding horizon gradient descent (RHGD) and receding horizon accelerated gradient (RHAG), and provide regret upper bounds for both. RHAG's upper bound nearly matches the lower bound, which shows both that the lower bound is tight and that RHAG is near-optimal. Finally, we present numerical experiments that complement the theoretical results.
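The receding-horizon idea in the abstract can be illustrated with a short sketch: at each step, plan over the W predicted cost functions (tracking cost plus quadratic switching cost) by running a few gradient-descent iterations on the windowed objective, then commit only the first decision. This is an assumption-laden illustration of the general receding-horizon-with-predictions setup, not the paper's exact RHGD or RHAG update; the quadratic tracking costs, step size, and iteration count below are all hypothetical choices.

```python
import numpy as np

def receding_horizon_gd(thetas, W, beta, eta=0.2, inner_iters=20):
    """Illustrative receding-horizon planner (not the paper's exact RHGD).

    At time t it sees the next W predicted targets thetas[t:t+W], runs
    gradient descent on the windowed objective
        sum_k (y_k - theta_k)^2 + (beta/2) * (y_k - y_{k-1})^2,
    and commits only the first planned decision.
    """
    T = len(thetas)
    xs = []
    prev = 0.0  # decision before time 0 (hypothetical initial state)
    for t in range(T):
        window = np.asarray(thetas[t:t + W], dtype=float)
        plan = np.full(len(window), prev)  # warm start from last decision
        for _ in range(inner_iters):
            grad = np.zeros_like(plan)
            p = prev
            for k in range(len(plan)):
                # gradient of tracking cost plus both adjacent switching terms
                grad[k] = 2.0 * (plan[k] - window[k]) + beta * (plan[k] - p)
                if k + 1 < len(plan):
                    grad[k] -= beta * (plan[k + 1] - plan[k])
                p = plan[k]
            plan -= eta * grad
        xs.append(plan[0])  # commit only the first decision
        prev = plan[0]
    return xs

def total_cost(xs, thetas, beta):
    """Realized tracking cost plus switching cost along the trajectory."""
    cost, prev = 0.0, 0.0
    for x, th in zip(xs, thetas):
        cost += (x - th) ** 2 + 0.5 * beta * (x - prev) ** 2
        prev = x
    return cost
```

A larger window W lets the planner move early toward upcoming targets, trading a little tracking cost now against switching cost later; with W = 1 the scheme degenerates to a myopic, no-prediction update.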