Understanding Distributional Ambiguity via Non-robust Chance Constraint

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Abstract: We propose a non-robust interpretation of the distributionally robust optimization (DRO) problem by relating the impact of distributional uncertainty to that of constraining the objective through tail probabilities. Our interpretation allows utility maximizers to understand the size of the ambiguity set through parameters that are directly linked to the chance-constraint parameters. We first show that for general $\phi$-divergences, a DRO problem is asymptotically equivalent to a class of mean-deviation problems, where the ambiguity radius controls the investor's risk preference. Based on this non-robust reformulation, we then show that when a boundedness constraint is added to the investment strategy, the DRO problem can be cast as a chance-constrained optimization (CCO) problem without distributional uncertainties. Without the boundedness constraint, the CCO problem is shown to perform uniformly better than the DRO problem, irrespective of the radius of the ambiguity set, the choice of the divergence measure, or the tail heaviness of the center distribution. Besides the widely used Kullback-Leibler (KL) divergence, which requires the distribution of the objective function to be exponentially bounded, our results apply to divergence measures that accommodate heavy-tailed distributions well, such as the Student $t$-distribution and the lognormal distribution. Comprehensive experiments on synthetic and real data are provided.
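The abstract's first claim, specialized to the KL divergence, can be checked numerically. For KL, the worst-case expectation over a divergence ball of radius $r$ has the well-known dual $\inf_{\lambda>0}\,\lambda r + \lambda \log \mathbb{E}_P[e^{Z/\lambda}]$, and for small $r$ it is approximated by the mean-deviation quantity $\mathbb{E}_P[Z] + \sqrt{2r}\,\mathrm{Std}_P(Z)$. The sketch below is an illustration of this asymptotic equivalence, not the paper's implementation; the loss distribution, radius, and search grid are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical loss samples from the "center" distribution P (illustrative only).
z = rng.normal(loc=0.05, scale=0.2, size=200_000)

def kl_worst_case(z, r):
    """Worst-case expectation sup_{KL(Q||P) <= r} E_Q[Z], computed via the
    dual  inf_{lam > 0}  lam * r + lam * log E_P[exp(Z / lam)]."""
    def dual(lam):
        m = z.max()  # log-sum-exp shift for numerical stability
        return lam * r + lam * (m / lam + np.log(np.mean(np.exp((z - m) / lam))))
    lams = np.linspace(0.1, 10.0, 500)  # coarse 1-D search over the dual variable
    return min(dual(lam) for lam in lams)

r = 0.005                                  # small ambiguity radius
dro = kl_worst_case(z, r)                  # robust (DRO) objective
md = z.mean() + np.sqrt(2 * r) * z.std()   # mean-deviation approximation
# For small r the two values nearly coincide, illustrating the equivalence.
print(f"DRO: {dro:.4f}  mean-deviation: {md:.4f}")
```

For a Gaussian center distribution the dual can be minimized in closed form and equals the mean-deviation expression exactly, so the two printed values agree up to sampling and grid error; for other distributions the agreement is asymptotic as $r \to 0$.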
Code: https://github.com/RobustInterpretation/Understanding-Distributional-Ambiguity-via-Non-robust-Chance-Constraint
Keywords: Heavy-tailed distribution, Chance constraint, Distributionally robust optimization