Superquantile-Based Learning: A Direct Approach Using Gradient-Based Optimization

Published: 01 Jan 2022 (Last Modified: 17 May 2023). Journal of Signal Processing Systems, 2022.
Abstract: We consider a formulation of supervised learning that endows models with robustness to distributional shifts from training to testing. The formulation hinges upon the superquantile risk measure, also known as the conditional value-at-risk, which has shown promise in recent applications of machine learning and signal processing. We show that, thanks to a direct smoothing of the superquantile function, a superquantile-based learning objective is amenable to gradient-based optimization, using batch algorithms such as gradient descent or quasi-Newton methods, or stochastic algorithms such as stochastic gradient descent. Companion software, SPQR, implements the described algorithms in Python and allows practitioners to experiment with superquantile-based supervised learning.
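To illustrate the idea concretely: writing the superquantile via the Rockafellar-Uryasev formulation, CVaR_alpha(L) = min_eta { eta + E[max(L - eta, 0)] / (1 - alpha) }, the nonsmooth positive part max(., 0) can be replaced by a smooth surrogate, after which the whole objective can be minimized jointly over the model parameters and eta with ordinary gradient methods. The sketch below is illustrative only and is not the SPQR package's API: it uses a softplus smoothing of the positive part (one of several possible smoothings, not necessarily the direct smoothing of the superquantile used in the paper), and all names (alpha, mu, eta, etc.) are hypothetical.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic regression data with an outlying subgroup, so the tail of
# the per-example loss distribution actually matters.
n, d = 200, 5
X = torch.randn(n, d)
w_true = torch.randn(d)
y = X @ w_true + 0.1 * torch.randn(n)
y[:20] += 3.0  # shifted subgroup -> heavy loss tail

alpha = 0.9  # superquantile level: average over the worst 10% of losses
mu = 0.1     # smoothing parameter (hypothetical choice)

w = torch.zeros(d, requires_grad=True)
eta = torch.tensor(0.0, requires_grad=True)  # Rockafellar-Uryasev threshold
opt = torch.optim.SGD([w, eta], lr=0.05)

for step in range(500):
    opt.zero_grad()
    losses = (X @ w - y) ** 2  # per-example squared losses
    # Smooth surrogate for max(t, 0): mu * log(1 + exp(t / mu)),
    # i.e. softplus with beta = 1 / mu.
    smoothed_excess = F.softplus(losses - eta, beta=1.0 / mu)
    objective = eta + smoothed_excess.mean() / (1.0 - alpha)
    objective.backward()
    opt.step()

print(f"final smoothed superquantile objective: {objective.item():.4f}")
```

Run with full-batch gradients as written, this corresponds to the batch setting mentioned in the abstract; swapping X and y for mini-batches inside the loop gives the stochastic-gradient variant.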