Track: Research Track
Keywords: online learning, data-dependent bounds, constraints
Abstract: This paper initiates the study of \emph{data-dependent} regret bounds in \emph{constrained} MAB settings.
These are bounds that depend on the actual sequence of losses characterizing the problem instance.
Thus, in principle, they can be much smaller than classical $\widetilde{\mathcal{O}}(\sqrt{T})$ regret bounds, while matching them in the worst case.
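For intuition, a standard example of such a bound from the unconstrained online-learning literature (not a result stated in this abstract) is the \emph{small-loss} regret bound for $K$-armed bandits:

```latex
% A standard small-loss (first-order) regret bound for $K$-armed bandits,
% where $L^*_T$ is the cumulative loss of the best fixed arm in hindsight:
\[
  R_T = \widetilde{\mathcal{O}}\!\left(\sqrt{K \, L^*_T}\right),
  \qquad
  L^*_T = \min_{i \in [K]} \sum_{t=1}^{T} \ell_t(i).
\]
% With losses in $[0,1]$ we have $L^*_T \le T$, so this recovers the
% worst-case $\widetilde{\mathcal{O}}(\sqrt{KT})$ rate, while benign
% instances with $L^*_T \ll T$ enjoy a much smaller regret.
```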
Despite this, data-dependent regret bounds have been completely overlooked in constrained MABs.
The goal of this paper is to answer the question: \emph{Can data-dependent regret bounds be derived in the presence of constraints?}
We provide an affirmative answer in constrained MABs with {adversarial} losses and {stochastic} constraints.
Specifically, our main focus is on the most challenging and natural settings with \emph{hard constraints}, where the learner must ensure that the constraints are always satisfied with high probability.
We design an algorithm with a regret bound consisting of \emph{two} data-dependent terms.
The first one captures the difficulty of satisfying the constraints, while the second one encodes the complexity of learning independently of their presence.
We also prove a lower bound showing that these two terms are \emph{not} artifacts of our specific approach and analysis, but rather the fundamental components that inherently characterize the problem complexity.
Finally, in designing our algorithm, we also derive some novel results for the related (and easier) \emph{soft constraints} setting, which may be of independent interest.
Submission Number: 4