Sequential Off-Policy Learning with Logarithmic Smoothing

Published: 03 Feb 2026, Last Modified: 03 Feb 2026, AISTATS 2026 Poster, CC BY 4.0
TL;DR: We propose two novel sequential off-policy algorithms based on logarithmic smoothing, both derived from PAC-Bayes bounds and enjoying convergence guarantees to the optimal policy.
Abstract: Off-policy learning enables training policies from logged interaction data. Most prior work considers the batch setting, where a policy is learned from data generated by a single behavior policy. In real systems, however, policies are updated and redeployed repeatedly, each time training on all previously collected data while generating new interactions for future updates. This sequential off-policy learning setting is common in practice but remains largely unexplored theoretically. In this work, we present and study a simple algorithm for \emph{sequential off-policy learning}, combining Logarithmic Smoothing (\texttt{LS}) estimation with online PAC-Bayesian tools. We further show that a principled adjustment to \texttt{LS} improves performance and accelerates convergence under mild conditions. The algorithms introduced generalize previous work: they match state-of-the-art offline approaches in the batch case and substantially outperform them when policies are updated sequentially. Empirical evaluations highlight both the benefits of the sequential framework and the strength of the proposed algorithms.
Submission Number: 786