Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor

Published: 31 Oct 2022, Last Modified: 09 Jan 2023
Venue: NeurIPS 2022 (Accept)
Keywords: Smoothed Online Convex Optimization, Adaptive regret with switching cost, Dynamic regret with switching cost, Discounted-Normal-Predictor
TL;DR: Based on Discounted-Normal-Predictor, we develop a simple algorithm for smoothed online convex optimization, which attains (nearly) optimal adaptive regret and dynamic regret with switching cost.
Abstract: In this paper, we investigate an online prediction strategy called Discounted-Normal-Predictor [Kapralov and Panigrahy, 2010] for smoothed online convex optimization (SOCO), in which the learner needs to minimize not only the hitting cost but also the switching cost. In the setting of learning with expert advice, Daniely and Mansour [2019] demonstrated that Discounted-Normal-Predictor can be utilized to yield nearly optimal regret bounds over any interval, even in the presence of switching costs. Inspired by their results, we develop a simple algorithm for SOCO: combining instances of online gradient descent (OGD) with different step sizes sequentially via Discounted-Normal-Predictor. Despite its simplicity, we prove that it minimizes the adaptive regret with switching cost, i.e., it attains nearly optimal regret with switching cost on every interval. By exploiting the theoretical guarantee of OGD for dynamic regret, we further show that the proposed algorithm can minimize the dynamic regret with switching cost on every interval.
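To make the high-level recipe in the abstract concrete, here is a minimal illustrative sketch of the idea of sequentially combining two OGD experts with different step sizes through a confidence weight driven by their discounted performance gap. Everything here is an assumption for illustration: the erf-shaped weight function `confidence`, the discount factor `rho`, the toy quadratic hitting costs, and the restriction to two experts are all simplifications; the paper's actual algorithm uses the exact Discounted-Normal-Predictor weighting of Kapralov and Panigrahy [2010] over a full schedule of step sizes, and also accounts for the switching cost.

```python
import numpy as np
from math import erf, sqrt

def ogd_step(x, grad, eta):
    """One step of online gradient descent with step size eta."""
    return x - eta * grad

def confidence(z, n):
    """Erf-shaped confidence weight g(z) in [0, 1].

    Illustrative stand-in for the Discounted-Normal-Predictor weight;
    the paper's exact g differs but has the same saturating shape.
    """
    return 0.5 * (1.0 + erf(z / sqrt(2.0 * n)))

# Toy stream: quadratic hitting costs f_t(x) = (x - b_t)^2 with a drifting target.
rng = np.random.default_rng(0)
T = 1000
targets = np.cumsum(0.01 * rng.standard_normal(T))  # slowly moving comparator

rho = 0.995                 # discount factor of the meta-predictor (assumed value)
n_eff = 1.0 / (1.0 - rho)   # effective window length induced by the discount
x_fast, x_slow = 0.0, 0.0   # two OGD experts with different step sizes
z = 0.0                     # discounted advantage of the fast expert over the slow one

for t in range(T):
    w = confidence(z, n_eff)             # mixing weight for the fast expert
    x = w * x_fast + (1.0 - w) * x_slow  # meta-prediction played by the learner
    b = targets[t]
    loss_fast = (x_fast - b) ** 2
    loss_slow = (x_slow - b) ** 2
    # The discounted "gain" of fast over slow drives the predictor's state,
    # so the weight adapts to whichever step size currently performs better.
    z = rho * z + (loss_slow - loss_fast)
    # Each expert updates with its own step size on its own gradient.
    x_fast = ogd_step(x_fast, 2.0 * (x_fast - b), eta=0.1)
    x_slow = ogd_step(x_slow, 2.0 * (x_slow - b), eta=0.01)

print(f"final meta-prediction {x:.3f}, target {targets[-1]:.3f}")
```

The discounting is what localizes the guarantee: because old gains decay geometrically, the mixing weight tracks the better expert on every recent interval rather than only on the whole horizon, which is the mechanism behind the adaptive-regret bounds the abstract refers to.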