Abstract: We study the piecewise-stationary combinatorial semi-bandit
problem with causally related rewards. In our nonstationary
environment, variations in the base arms’ distributions, causal relationships
between rewards, or both, change the reward generation
process. In such an environment, an optimal decision-maker must
track both sources of change and adapt accordingly. The problem
becomes more challenging in the combinatorial semi-bandit setting, where
the decision-maker only observes the outcome of the selected bundle
of arms. The core of our proposed policy is the Upper Confidence
Bound (UCB) algorithm. To overcome the challenge of nonstationarity,
our agent relies on an adaptive approach; more specifically, it employs
a change-point detector based on the Generalized Likelihood
Ratio test. In addition, we introduce the notion of group restart as a
new restarting strategy for the decision-making process in
structured environments. Finally, our algorithm integrates a mechanism
to track variations in the underlying graph structure, which
captures the causal relationships between the rewards in the bandit
setting. Theoretically, we establish a regret upper bound that reflects
the effects of the number of structural and distributional changes on
performance. Our numerical experiments on real-world
scenarios demonstrate the applicability and superior performance of
our proposal compared to state-of-the-art benchmarks.
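To make the abstract's ingredients concrete, the following minimal Python sketch combines a UCB index policy over base arms, a Bernoulli Generalized Likelihood Ratio change-point test, and a group restart that resets the whole set of arms when any single detector fires. All names (GroupRestartUCB, glr_change_detected) and parameters (threshold, k) are illustrative assumptions, not the paper's specification, and the paper's mechanism for tracking the causal graph structure is omitted here.

    import math
    import numpy as np

    def bern_kl(p, q, eps=1e-12):
        # KL divergence between Bernoulli(p) and Bernoulli(q), clamped away from 0/1.
        p = min(max(p, eps), 1 - eps)
        q = min(max(q, eps), 1 - eps)
        return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

    def glr_change_detected(rewards, threshold):
        # Bernoulli GLR test: scan every split point of the reward stream and
        # flag a change if the best split's statistic exceeds the threshold.
        n = len(rewards)
        if n < 2:
            return False
        prefix = np.cumsum(rewards)
        total_mean = prefix[-1] / n
        for s in range(1, n):
            mu1 = prefix[s - 1] / s                      # mean before the split
            mu2 = (prefix[-1] - prefix[s - 1]) / (n - s)  # mean after the split
            stat = s * bern_kl(mu1, total_mean) + (n - s) * bern_kl(mu2, total_mean)
            if stat > threshold:
                return True
        return False

    class GroupRestartUCB:
        # Hypothetical sketch: UCB indices over base arms with a per-arm GLR
        # detector; a detection restarts the entire group of arms (group restart).
        def __init__(self, n_arms, k, threshold):
            self.n_arms, self.k, self.threshold = n_arms, k, threshold
            self.reset()

        def reset(self):
            self.t = 0
            self.history = [[] for _ in range(self.n_arms)]

        def select(self):
            # Pick the k arms with the highest UCB indices (the semi-bandit action).
            self.t += 1
            ucb = np.empty(self.n_arms)
            for a, h in enumerate(self.history):
                if not h:
                    ucb[a] = math.inf  # force exploration of unplayed arms
                else:
                    ucb[a] = np.mean(h) + math.sqrt(2 * math.log(self.t) / len(h))
            return np.argsort(ucb)[-self.k:]

        def update(self, arms, rewards):
            # Record semi-bandit feedback; restart all arms on any detection.
            for a, r in zip(arms, rewards):
                self.history[a].append(r)
                if glr_change_detected(self.history[a], self.threshold):
                    self.reset()
                    return

In this toy version a single detection wipes all statistics; the motivation for a group restart in structured environments is that a change in one base arm's distribution or in the causal graph typically invalidates the estimates of related arms as well.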