Keywords: learning-augmented algorithm, multiple experts, primal-dual algorithm, covering problem
TL;DR: We present algorithms for 0-1 online covering problems that are competitive with respect to a new dynamic benchmark, while remaining within a constant factor of the best-known guarantees in the worst-case benchmark.
Abstract: Designing online algorithms with machine learning predictions is a recent approach that extends beyond the worst-case paradigm for various practically relevant online problems, such as scheduling, caching, and clustering. While most previous learning-augmented algorithms focus on integrating the predictions of a single oracle, we study the design of online algorithms with \emph{multiple} prediction sources (experts). To go beyond the performance guarantee of the popular static best-expert-in-hindsight benchmark, we introduce a new benchmark that can be viewed as a linear combination of predictions that evolves over time.
We present algorithms for $0$-$1$ online covering problems that are competitive with respect to the new dynamic benchmark, with a performance guarantee of $O(\log K)$ if the objective is linear and $O\!\left(\frac{\lambda \ln K}{1 - \mu \ln K}\right)$ if the objective is non-linear, where $K$ is the number of experts and $(\lambda, \mu)$ are parameters of the objective function. Our approach gives a new perspective on combining multiple algorithms in an online manner (a central subject in online algorithms research) using machine learning techniques.
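The paper's algorithms are primal-dual; as a purely generic illustration of the setting (not the paper's method), the following sketch combines $K$ expert solutions to a 0-1 covering instance with a multiplicative-weights rule, down-weighting experts whose proposals fail to cover an arriving constraint. All names and the unit-cost feasibility fallback are assumptions for this toy example.

```python
import math

def combine_experts_mwu(constraints, expert_solutions, eta=0.5):
    """Toy multiplicative-weights combination of K expert solutions for
    a 0-1 covering instance (illustration only, not the paper's
    primal-dual algorithm).

    constraints: list of constraints, each a list of variable indices,
        at least one of which must be set to 1.
    expert_solutions: expert_solutions[k][t] is the set of variables
        expert k has set to 1 by the time constraint t arrives.
    """
    K = len(expert_solutions)
    weights = [1.0] * K          # one weight per expert
    chosen = set()               # variables our online solution sets to 1
    for t, constraint in enumerate(constraints):
        proposals = [expert_solutions[k][t] for k in range(K)]
        total = sum(weights)
        # Buy every variable backed by at least half the expert weight.
        for var in constraint:
            mass = sum(w for w, p in zip(weights, proposals) if var in p)
            if mass >= total / 2:
                chosen.add(var)
        # Feasibility fallback (unit costs assumed): if the constraint
        # is still uncovered, buy its first variable.
        if not chosen & set(constraint):
            chosen.add(constraint[0])
        # Penalize experts whose own proposal left the constraint uncovered.
        for k in range(K):
            if not set(proposals[k]) & set(constraint):
                weights[k] *= math.exp(-eta)
    return chosen
```

With two experts that each always propose a single fixed variable, the combined solution covers every arriving constraint while the weight of an expert that misses a constraint decays exponentially.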
Submission Number: 158