Learning-Augmented Algorithms for MTS with Bandit Access to Multiple Predictors

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We obtain tight regret bounds for combining heuristics for MTS in a bandit setting.
Abstract: Combining algorithms is one of the key techniques in learning-augmented algorithms. We consider the following problem: We are given $\ell$ heuristics for Metrical Task Systems (MTS), where each might be tailored to a different type of input instance. While processing an input instance received online, we are allowed to query the action of only one of the heuristics at each time step. Our goal is to achieve performance comparable to the best of the given heuristics. The main difficulty of our setting comes from the fact that the cost paid by a heuristic at time $t$ cannot be estimated unless the same heuristic was also queried at time $t-1$. This is related to Bandit Learning against memory-bounded adversaries (Arora et al., 2012). We show how to achieve regret of $O(\text{OPT}^{2/3})$ and prove a tight lower bound based on the construction of Dekel et al. (2013).
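The abstract's constraint — a heuristic's cost is only observable if it was also queried in the previous step — is the classical motivation for playing arms in blocks. The following is a minimal, hypothetical sketch (not the paper's algorithm): an EXP3-style combiner that follows a single heuristic for a block of `tau` steps, so all costs inside the block are observable, and then updates an importance-weighted estimate for the chosen heuristic only. The cost arrays, block length, and learning rate are illustrative assumptions.

```python
import math
import random

def blocked_exp3(costs, tau, eta, seed=0):
    """Illustrative blocked bandit combiner (simplified, not the paper's
    method).  `costs[i][t]` in [0, 1] is heuristic i's cost at step t
    (hypothetical inputs; the paper works with general MTS costs).
    Within each block of length `tau`, we follow one heuristic, so its
    per-step costs are observable; between blocks we re-sample."""
    rng = random.Random(seed)
    ell, T = len(costs), len(costs[0])
    weights = [1.0] * ell
    total = 0.0
    for start in range(0, T, tau):
        z = sum(weights)
        probs = [w / z for w in weights]
        i = rng.choices(range(ell), probs)[0]  # heuristic for this block
        block = costs[i][start:start + tau]
        total += sum(block)
        # Importance-weighted loss estimate for the queried heuristic only;
        # unqueried heuristics receive estimate 0, as in EXP3.
        est = (sum(block) / len(block)) / probs[i]
        weights[i] *= math.exp(-eta * est)
    return total
```

The block length trades off switching/exploration overhead against estimation frequency; tuning such a trade-off is what produces $T^{2/3}$-type regret rates in bandit settings with switching costs (Dekel et al., 2013), mirroring the $O(\text{OPT}^{2/3})$ bound stated above.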
Lay Summary: Many complex tasks in power management, routing, production systems, and investing are characterized by a need for efficient and dynamic decision-making in the context of limited information and switching costs. One needs to make choices sequentially, without knowing what comes in the future and without being able to change the decisions already taken. Machine learning models are capable of producing informative predictions about the future; however, these predictions may be arbitrarily poor. Moreover, obtaining predictions may be expensive from both a computational and a financial perspective. Our task is to create an algorithm that combines the advice received from a portfolio of predictors in such a way as to perform as well as the best one, while only asking for the advice of one of the models at each time step. We provide such an algorithm and show that it is optimal for a wide range of problems. Our work can be used, for example, to improve the performance of data centers or to make computers faster by improving memory efficiency.
Primary Area: Theory->Online Learning and Bandits
Keywords: Metrical Task System, Bandits, Learning-Augmented Algorithms
Submission Number: 10900