Expected Pinball Loss For Quantile Regression And Inverse CDF Estimation

Published: 22 Feb 2024, Last Modified: 22 Feb 2024. Accepted by TMLR.
Abstract: We analyze and improve a recent strategy to train a quantile regression model by minimizing an expected pinball loss over all quantiles. Through an asymptotic convergence analysis, we show that minimizing the expected pinball loss can be more efficient at estimating single quantiles than training with the standard pinball loss for that quantile, an insight that generalizes the known deficiencies of the sample quantile in the unconditioned setting. Then, to guarantee a legitimate inverse CDF, we propose using flexible deep lattice networks with a monotonicity constraint on the quantile input to ensure non-crossing quantiles, and show that lattice models can be regularized to the same location-scale family. Our analysis and experiments on simulated and real datasets show that the proposed method produces state-of-the-art legitimate inverse CDF estimates that are likely to be as good as or better for specific target quantiles.
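As a rough illustration of the expected pinball loss objective described in the abstract, the sketch below shows a Monte Carlo estimate with quantile levels drawn uniformly from (0, 1). It is not the released implementation (see the Code link below); in particular, the `model(x, tau)` callable is a hypothetical stand-in for a quantile regression model that maps features and a quantile level to a predicted conditional quantile.

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    # Pinball (quantile) loss for a single quantile level tau in (0, 1).
    diff = y - y_hat
    return np.where(diff >= 0, tau * diff, (tau - 1.0) * diff)

def expected_pinball_loss(y, x, model, num_taus=64, rng=None):
    # Monte Carlo estimate of the expected pinball loss with tau ~ Uniform(0, 1).
    # `model(x, tau)` is a hypothetical callable returning the predicted
    # tau-th conditional quantile of y given x (an assumed interface,
    # not the API of the linked repository).
    rng = np.random.default_rng() if rng is None else rng
    taus = rng.uniform(0.0, 1.0, size=num_taus)
    per_tau = [np.mean(pinball_loss(y, model(x, tau), tau)) for tau in taus]
    return float(np.mean(per_tau))
```

In training, this expectation would be minimized over the model parameters; the paper's variant with a Beta distribution over quantiles corresponds to replacing the uniform sampling of `taus` with Beta-distributed draws.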
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=mbyjtOIizu
Changes Since Last Submission:

[Author response revision] We made minor edits to the paper in line with reviewer comments, including:
- Replacing use of "convergence rate" and associated phrases with some variation of asymptotic convergence analysis, asymptotic variance, etc., in order to be more precise.
- Improving clarity and fixing typos relating to the use of 'model' vs. 'estimator', consistency of $f(x, \tau; \theta)$ notation, and so on.
- Explicitly writing out the inverse CDFs of the distributions used in the examples in Sec 3.2.3 and 3.2.4.
- Improving the wording of some sentences.

[Original resubmission] Since our last submission we have made two major batches of changes.

1. Between the first paper's rejection and this submission:
- Adding additional context and caveats around our theory's assumptions and contributions, including its assumption of a well-specified functional form (in Sec 3.2).
- Analyzing the effects of misspecification on our method's performance, including how and why our method is still able to deliver improvements and when it can fail (in Sec 3.5).
- Adding a theoretical analysis of convergence when learning with a Beta distribution over quantiles instead of a Uniform distribution (in Sec 3.3).
- Clarifying how our theory does and does not handle monotonicity constraints in addition to the core expected pinball loss objective (in Sec 3.2.2).

2. During the author response period of the original submission:
- Extending our theory to the case of conditional features (in Sec 3.4), noting that this still does not cover the fully general conditional inverse CDF problem.
- Further elaborating on the conditions needed for Theorem 1 to hold, including adding a Lemma with sufficient conditions (in Sec 3.2.2).
- Empirically validating our theoretical findings on the Uniform distribution in Table 1 (in Sec 3.2.4).
- Cross-validating over the number of keypoints in our first simulation experiment, with results in Table 2 (in Sec 3.5).
- Adding empirical results for the Beta-pinball loss's performance in simulations and real-data experiments (in Sec 3.5, 5.5).
- Enriching our discussion of related work on uncertainty and open problems (in Sec 1, 6).
- Augmenting our explanation of DLNs and how monotonicity-preserving layers can be incorporated into general architectures (in Sec 4).
Code: https://github.com/google-research/google-research/tree/master/quantile_regression
Assigned Action Editor: ~Daniel_M_Roy1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1087