Robust Recourse via Kernel Distributionally Robust Optimization and Bayesian Posterior Predictive Modeling

TMLR Paper7311 Authors

03 Feb 2026 (modified: 06 Feb 2026) · Under review for TMLR · CC BY 4.0
Abstract: Machine learning recourse provides actionable recommendations to achieve favorable outcomes from predictive decision models. A critical limitation of current approaches is their reliance on the assumption of model stationarity, an assumption that is frequently violated in dynamic, real-world settings with distributional shifts. Robust approaches such as Robust Algorithmic Recourse (ROAR) and the Wasserstein-based DiRRAc address some uncertainties but remain limited in handling nonlinear dependencies and large-scale shifts, including concept drift and adversarial perturbations. We propose Kernel Distributionally Robust Recourse Action (KDRRA), a framework that defines ambiguity sets using the Maximum Mean Discrepancy (MMD) in a Reproducing Kernel Hilbert Space (RKHS), enabling flexible, nonparametric modeling of complex, nonlinear discrepancies between distributions. A practical challenge for kernel DRO is that empirical kernel mean embeddings can deviate from the true distribution, inflating ambiguity radii and yielding overly conservative recommendations. To address this, we introduce Bayesian KDRRA (BKDRRA), which centers the ambiguity set on a Bayesian posterior predictive distribution constructed via the posterior bootstrap. This Bayesian centering integrates sampling variability and moderate model uncertainty into the reference distribution, leading to tighter ambiguity sets and markedly lower conservatism without sacrificing robustness. Leveraging the representer theorem, we derive finite-dimensional convex reformulations of the worst-case recourse optimization for both KDRRA and BKDRRA. We conduct a comprehensive empirical evaluation across three real-world datasets that exhibit correction, temporal, and geospatial shifts. KDRRA consistently outperforms state-of-the-art baselines, achieving superior robustness at lower recourse cost, while BKDRRA further improves stability and calibration by integrating Bayesian uncertainty. Our research advances the frontier of distributionally robust recourse by integrating machine learning tools and optimization, offering reliable and resilient decision-making under uncertainty.
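To make the MMD-based ambiguity set concrete, the following is a minimal illustrative sketch (not the authors' implementation) of the empirical squared MMD between two samples under an RBF kernel, the quantity that bounds the distance between a candidate distribution and the reference distribution at the center of the ambiguity set. The estimator, kernel choice, and bandwidth `gamma` here are generic assumptions for illustration:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of the squared MMD between the
    # kernel mean embeddings of the samples X and Y in the RKHS:
    # ||mu_X - mu_Y||_H^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')]
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))  # reference sample
Y = rng.normal(1.5, 1.0, size=(200, 2))  # shifted sample (distribution shift)
print(mmd2(X, X, gamma=0.5))  # near 0: identical samples
print(mmd2(X, Y, gamma=0.5))  # clearly positive: MMD detects the shift
```

In a kernel DRO formulation, the ambiguity set would collect all distributions whose embedding lies within a chosen MMD radius of the reference embedding; the Bayesian centering described above tightens that set by replacing the raw empirical sample with posterior predictive draws.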
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Samuel_Vaiter1
Submission Number: 7311