Multi-armed Bandits with Missing Outcomes

Published: 07 May 2025, Last Modified: 17 Jun 2025, UAI 2025 Poster, CC BY 4.0
Keywords: missing data, multi-armed bandits, causal inference, online decision making, missingness
TL;DR: We address the challenge of missing outcomes in multi-armed bandit problems, proposing a rigorous framework for both random and non-random missingness to improve decision-making in real-world scenarios.
Abstract: While significant progress has been made in designing algorithms that minimize regret in online decision-making, real-world scenarios often introduce additional complexities, with missing outcomes among the most challenging. Overlooking missingness, or simply assuming outcomes are missing at random, leads to biased reward estimates and can result in linear regret. Despite the practical relevance of this challenge, no rigorous methodology currently exists for systematically handling missingness, especially when the missingness mechanism is not random. In this paper, we address this gap in the context of multi-armed bandits (MAB) with missing outcomes by analyzing the impact of different missingness mechanisms on achievable regret bounds. We introduce algorithms that account for missingness under both missing at random (MAR) and missing not at random (MNAR) models. Through both analytical and simulation studies, we demonstrate the substantial improvements in decision-making that result from accounting for missingness in these settings.
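
The abstract's central point, that ignoring non-random missingness biases reward estimates and can steer a bandit toward the wrong arm, can be reproduced in a toy simulation. The sketch below is illustrative only and is not the paper's algorithm: it assumes the outcome-dependent observation probabilities (the hypothetical `p_obs` table below) are known, which is precisely what the MNAR setting does not grant in practice. Under that assumption, an inverse-probability-weighted (IPW) estimate recovers the true arm means where the naive average over observed outcomes does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Bernoulli arms; arm 0 is truly better.
true_means = np.array([0.7, 0.5])

# MNAR missingness on arm 1: successes are recorded far more often than
# failures, so the observed outcomes over-represent reward 1.
# p_obs[arm][reward] = probability the outcome is recorded (assumed known
# here purely for illustration; under MNAR it is generally unknown).
p_obs = {0: {0: 1.0, 1: 1.0},
         1: {0: 0.1, 1: 0.9}}

T = 50_000
naive_sum = np.zeros(2)   # sum of observed rewards per arm
naive_n = np.zeros(2)     # number of observed outcomes per arm
ipw_sum = np.zeros(2)     # inverse-probability-weighted reward sums
pulls = np.zeros(2)       # total pulls per arm

for t in range(T):
    arm = t % 2  # round-robin pulls, just to compare the two estimators
    reward = int(rng.random() < true_means[arm])
    observed = rng.random() < p_obs[arm][reward]
    pulls[arm] += 1
    if observed:
        naive_sum[arm] += reward
        naive_n[arm] += 1
        # IPW correction: reweight by the inverse observation probability.
        ipw_sum[arm] += reward / p_obs[arm][reward]

print("true means      :", true_means)
print("naive (observed):", np.round(naive_sum / naive_n, 3))  # arm 1 inflated to ~0.9
print("IPW             :", np.round(ipw_sum / pulls, 3))      # close to true means
```

A bandit that ranks arms by the naive observed averages would settle on the suboptimal arm 1 here and incur linear regret, which matches the failure mode the abstract describes.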
Supplementary Material: zip
LaTeX Source Code: zip
Code Link: https://github.com/ilia-mahrooghi/Multi-armed-Bandits-with-Missing-Outcome
Signed PMLR Licence Agreement: pdf
Submission Number: 811