Distributionally Robust Policy Learning under Concept Drifts

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose a minimax optimal offline policy learning algorithm that is robust under concept drifts.
Abstract: Distributionally robust policy learning aims to find a policy that performs well under the worst-case distributional shift, yet most existing methods for robust policy learning consider the worst-case *joint* distribution of the covariate and the outcome. This joint-modeling strategy can be unnecessarily conservative when we have more information about the source of distributional shifts. This paper studies a more nuanced problem --- robust policy learning under *concept drift*, where only the conditional relationship between the outcome and the covariate changes. To this end, we first provide a doubly-robust estimator for evaluating the worst-case average reward of a given policy under a set of perturbed conditional distributions. We show that the policy value estimator enjoys asymptotic normality even if the nuisance parameters are estimated at a slower-than-root-$n$ rate. We then propose a learning algorithm that outputs the policy maximizing the estimated policy value within a given policy class $\Pi$, and show that the sub-optimality gap of the proposed algorithm is of the order $\kappa(\Pi)n^{-1/2}$, where $\kappa(\Pi)$ is the entropy integral of $\Pi$ under the Hamming distance and $n$ is the sample size. A matching lower bound is provided to show the optimality of the rate. The proposed methods are implemented and evaluated in numerical studies, demonstrating substantial improvement compared with existing benchmarks.
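For concreteness, the objective described in the abstract can be formalized as below. This is an illustrative reconstruction: the divergence $D$ and radius $\delta$ defining the uncertainty set are placeholders, since the abstract does not specify the paper's exact choice.

```latex
% Worst-case policy value under concept drift (illustrative formalization):
% only the conditional law of the outcome given the covariate X is perturbed,
% within a divergence ball of radius \delta around the observed conditional P_0.
\[
  V_{\mathrm{rob}}(\pi)
  \;=\;
  \mathbb{E}_{X \sim P_X}\!\Big[
    \inf_{Q(\cdot \mid X)\,:\, D\big(Q(\cdot \mid X)\,\|\,P_0(\cdot \mid X)\big) \le \delta}
    \mathbb{E}_{Y \sim Q(\cdot \mid X)}\big[\,Y\big(\pi(X)\big)\,\big]
  \Big],
  \qquad
  \hat{\pi} \;=\; \arg\max_{\pi \in \Pi}\, \widehat{V}_{\mathrm{rob}}(\pi).
\]
```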
Lay Summary: Most of the existing robust offline policy learning literature adopts a joint-modeling strategy, which can be unnecessarily conservative when we have more information about the source of distributional shifts. We study the policy learning problem under concept drift and develop a minimax optimal policy learning algorithm. Our methodology efficiently learns a policy with optimal worst-case average performance under concept drift, and can be extended to a more general setting with an additional identifiable covariate shift. A minimal code sketch of the overall recipe is given below, before the link to the full implementation.
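The sketch below illustrates the two-step recipe the abstract describes: estimate a worst-case policy value from logged bandit data, then pick the policy in the class maximizing that estimate. It is only a simplified plug-in (IPW) illustration under an assumed KL uncertainty set applied to the reweighted reward sample; the paper's actual estimator targets the conditional (concept-drift) uncertainty set and adds a doubly-robust correction with an outcome model. All function names here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def kl_worst_case_mean(rewards, weights, rho):
    """Worst-case mean of `rewards` over distributions within KL radius `rho`
    of the weighted empirical distribution, via the standard dual
      inf_{Q: KL(Q||P) <= rho} E_Q[R] = sup_{a>0} -a*log E_P[exp(-R/a)] - a*rho.
    """
    w = weights / weights.sum()

    def neg_dual(a):
        z = -rewards / a
        m = z.max()
        log_mgf = m + np.log(np.sum(w * np.exp(z - m)))  # stable log E_P[exp(-R/a)]
        return a * log_mgf + a * rho                     # negative of the dual objective

    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e3), method="bounded")
    return -res.fun


def ipw_worst_case_value(X, A, R, propensity, policy, rho):
    """Plug-in (IPW) estimate of the worst-case value of `policy` from logged
    bandit data (X, A, R). A doubly-robust version, as in the paper, would
    additionally use a fitted outcome model."""
    w = (policy(X) == A) / propensity   # keep samples whose logged action matches the policy
    keep = w > 0
    return kl_worst_case_mean(R[keep], w[keep], rho)


def learn_policy(X, A, R, propensity, policy_class, rho):
    """Select the policy in a finite class maximizing the estimated worst-case value."""
    vals = [ipw_worst_case_value(X, A, R, propensity, pi, rho) for pi in policy_class]
    return policy_class[int(np.argmax(vals))]
```

For example, with binary actions one could take `policy_class = [lambda X: np.zeros(len(X), dtype=int), lambda X: (X[:, 0] > 0).astype(int)]` and call `learn_policy(X, A, R, propensity, policy_class, rho=0.1)`; the returned policy maximizes the estimated worst-case value over the class.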
Link To Code: https://github.com/off-policy-learning/concept-drift-robust-learning
Primary Area: General Machine Learning->Causality
Keywords: distributionally robust optimization, offline policy learning, concept drift, bandit learning, reinforcement learning
Submission Number: 8736