Abstract: We study the non-contextual multi-armed bandit problem in a transfer learning setting: before any pulls, the learner receives $N'_k$ i.i.d.\ samples from each source distribution $\nu'_k$, and the true target distributions $\nu_k$ are known to satisfy $d_k(\nu_k,\nu'_k)\le L_k$. In this framework, we first derive a problem-dependent asymptotic lower bound on cumulative regret that extends the classical Lai–Robbins result to incorporate the transfer parameters $(d_k,L_k,N'_k)$. We then propose \textsc{KL-UCB-Transfer}, a simple index policy that matches this new bound in the Gaussian case. Finally, we validate our approach via simulations, showing that \textsc{KL-UCB-Transfer} significantly outperforms the no-prior baseline when the source and target distributions are sufficiently close.
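For intuition only: the abstract does not spell out the index, but a natural way such a policy could exploit the source data in the Gaussian known-variance case is to take the tighter of the usual Gaussian KL-UCB bound and the upper bound implied by the source-sample mean, the distance cap $L_k$, and the source estimation error. The sketch below is exactly that guess; every name, constant, and the form of the confidence widths are assumptions, not the paper's construction.

```python
import math

import numpy as np


def klucb_transfer_index(mu_hat, n_pulls, t, mu_source, n_source, L, sigma=1.0):
    """Hypothetical transfer-aware Gaussian index (not the paper's exact form).

    Returns the tighter of (a) the usual Gaussian KL-UCB bound built from the
    target pulls and (b) an upper bound implied by the source-sample mean,
    the distance cap L, and the source estimation error.
    """
    log_t = math.log(max(t, 2))
    # (a) Standard Gaussian upper confidence bound from target pulls alone.
    ucb_target = mu_hat + sigma * math.sqrt(2.0 * log_t / max(n_pulls, 1))
    # (b) Source mean estimate, shifted by the distance cap L and widened
    #     by the estimation error on the N' source samples.
    ucb_source = mu_source + L + sigma * math.sqrt(2.0 * log_t / n_source)
    return min(ucb_target, ucb_source)


# Toy run on two Gaussian arms; all numbers are illustrative.
rng = np.random.default_rng(0)
true_means = [0.0, 0.5]             # unknown target means
mu_source = [0.1, 0.45]             # empirical means of the N'_k source samples
n_source, L, horizon = [500, 500], 0.2, 2000

pulls, sums = [0, 0], [0.0, 0.0]
for t in range(1, horizon + 1):
    if t <= 2:                      # pull each arm once to initialize
        k = t - 1
    else:
        k = max(range(2), key=lambda a: klucb_transfer_index(
            sums[a] / pulls[a], pulls[a], t, mu_source[a], n_source[a], L))
    r = rng.normal(true_means[k], 1.0)
    pulls[k] += 1
    sums[k] += r

print("pull counts:", pulls)        # the better arm should dominate
```

Note that when $L_k$ is large or $N'_k$ is small, the source-side bound is loose and this index reduces to plain KL-UCB, which is consistent with the abstract's claim that the gains over the no-prior baseline appear when source and target distributions are sufficiently close.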
Supplementary Material: zip
Submission Number: 84