Beyond Black-Box Advice: Learning-Augmented Algorithms for MDPs with Q-Value Predictions

Published: 21 Sept 2023, Last Modified: 19 Jan 2024 · NeurIPS 2023 poster
Keywords: Time-varying MDP, Learning-augmented online algorithm, consistency and robustness tradeoff
TL;DR: We study whether Q-value advice from an untrusted machine-learned policy yields a better consistency and robustness tradeoff than black-box advice in the context of learning-augmented online algorithms for MDPs.
Abstract: We study the tradeoff between consistency and robustness in the context of a single-trajectory, time-varying Markov Decision Process (MDP) with untrusted machine-learned advice. Our work departs from the typical approach of treating advice as coming from a black-box source; instead, we consider a setting where additional information about how the advice is generated is available. We prove a first-of-its-kind consistency and robustness tradeoff given Q-value advice under a general MDP model that covers both continuous and discrete state/action spaces. Our results highlight that utilizing Q-value advice enables dynamic pursuit of the better of the machine-learned advice and a robust baseline, thus yielding near-optimal performance guarantees and provably improving on what can be obtained with black-box advice alone.
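To convey the flavor of "dynamically pursuing the better of the advice and a robust baseline," here is a minimal, hypothetical Python sketch of an advice-hedging action rule. The names (`q_ml`, `q_rob`, `pi_rob`) and the trust parameter `lam` are illustrative assumptions; this is not the paper's algorithm and carries none of its guarantees.

```python
"""Hypothetical sketch (not the authors' algorithm): follow the untrusted
machine-learned advice only while the robust baseline's own Q-values certify
that doing so keeps the cost-to-go within a (1 + lam) factor of the baseline;
otherwise fall back to the baseline action. All names are illustrative."""


def advised_action(state, actions, q_ml, q_rob, pi_rob, lam=0.5):
    """Pick an action using untrusted Q-value advice, hedged by a robust baseline.

    q_ml(s, a)  : advised (untrusted) cost-to-go estimate of the ML policy.
    q_rob(s, a) : cost-to-go estimate of a known robust baseline policy.
    pi_rob(s)   : the robust baseline's action at state s.
    lam         : trust parameter; lam = 0 reduces to the robust baseline,
                  larger values follow the advice more aggressively.
    """
    a_ml = min(actions, key=lambda a: q_ml(state, a))  # greedy w.r.t. the advice
    a_rb = pi_rob(state)                               # robust fallback action
    # Accept the advised action only if, measured by the robust Q-values,
    # it stays within a (1 + lam) factor of the baseline's cost-to-go.
    if q_rob(state, a_ml) <= (1.0 + lam) * q_rob(state, a_rb):
        return a_ml
    return a_rb


if __name__ == "__main__":
    # Toy two-state, two-action example: in state 0 the advised action is also
    # acceptable to the robust baseline, so it is followed; in state 1 the
    # advised action looks too costly under q_rob, so the rule falls back.
    actions = [0, 1]
    q_ml = lambda s, a: {0: [1.0, 3.0], 1: [5.0, 0.1]}[s][a]
    q_rob = lambda s, a: {0: [2.0, 2.5], 1: [1.0, 3.0]}[s][a]
    pi_rob = lambda s: min(actions, key=lambda a: q_rob(s, a))
    for s in [0, 1]:
        print(s, advised_action(s, actions, q_ml, q_rob, pi_rob, lam=0.3))
```

The single parameter `lam` plays the role of the consistency-robustness knob in this sketch: small values stay close to the robust baseline, large values chase the advice.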
Supplementary Material: pdf
Submission Number: 6602