Keywords: Distributional Shift, Distributionally Robust Optimization, Robust Satisficing, Generalization Upper Bound
Abstract: Distributional shifts commonly arise in practice when the target environment differs from the source environment that provides training data.
Robust learning frameworks such as Distributionally Robust Optimization (DRO) and Robust Satisficing (RS) have been developed to address this challenge, yet their theoretical behavior under such shifts remains insufficiently understood.
This paper analyzes their performance under distributional shifts measured by the Wasserstein distance, focusing on the generalization error defined as the excess loss in the target environment.
We derive the first generalization error bounds that explicitly characterize how DRO and RS trade off improved robustness in the target environment against the regularization cost of imposing that robustness, while avoiding the curse of dimensionality.
When partial information about the shift, such as its magnitude or direction, is available, we systematically compare the two methods and provide theory-based guidelines for choosing between them, supported by simulation results.
Finally, we demonstrate the practical relevance of our framework through an application to a network lot-sizing problem.
This work fills theoretical gaps in robust learning under distributional shifts and provides practical guidance for algorithm design.
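For orientation, a minimal sketch of the two objectives in their standard Wasserstein form; the notation (empirical source distribution \hat{P}, ambiguity radius \varepsilon, target level \tau, fragility level k) is conventional and assumed here, not taken from the abstract:

% DRO: minimize the worst-case expected loss over a Wasserstein ball
% of assumed radius \varepsilon around the empirical distribution \hat{P}.
\min_{\theta}\; \sup_{Q:\, W(Q,\hat{P}) \le \varepsilon} \mathbb{E}_{Q}\!\left[\ell(\theta;\xi)\right]

% RS: minimize the fragility level k such that, for every distribution Q,
% the expected loss exceeds an assumed target \tau by at most k times
% the Wasserstein distance of Q from \hat{P}.
\min_{\theta,\, k \ge 0}\; k \quad \text{s.t.} \quad \mathbb{E}_{Q}\!\left[\ell(\theta;\xi)\right] \le \tau + k\, W(Q,\hat{P}) \quad \forall Q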
Submission Number: 225