Conquer the Quantile: Convolution-Smoothed Quantile Regression with Neural Networks and Minimax Guarantees

16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: quantile regression, minimax rate, convolution, deep learning theory, Besov space
Abstract: Quantile regression provides a flexible approach to modeling heterogeneous effects and tail behaviors. This paper introduces the first quantile neural network estimator built upon the \textbf{con}volution-type smoothing \textbf{qu}antil\textbf{e} \textbf{r}egression (known as \textit{conquer}) framework, which preserves both convexity and differentiability while retaining the robustness of the quantile loss. Extending the conquer estimator beyond linear models, we develop a nonparametric deep learning framework and establish sharp statistical guarantees. Specifically, we show that our estimator attains the minimax convergence rate over Besov spaces up to a logarithmic factor, matching the fundamental limits of nonparametric quantile estimation, and we further derive upper bounds on the estimation error over more general function classes. Empirical studies demonstrate that our method consistently outperforms existing quantile networks in both estimation accuracy and computational efficiency, underscoring the benefits of incorporating conquer into deep quantile learning.
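To make the smoothing idea concrete, below is a minimal sketch of a convolution-smoothed quantile loss with a Gaussian kernel, as it might be plugged into a neural network training loop. The closed form follows from convolving the check loss $\rho_\tau(u) = u(\tau - \mathbf{1}\{u < 0\})$ with a Gaussian density of bandwidth $h$; the function name, default bandwidth, and PyTorch framing are illustrative assumptions, not the paper's actual implementation.

```python
import torch
from torch.distributions import Normal


def conquer_loss(y_pred, y_true, tau=0.5, h=0.1):
    """Gaussian-kernel convolution-smoothed quantile (conquer) loss (sketch).

    Convolving the check loss rho_tau(u) = u * (tau - 1{u < 0}) with a
    Gaussian kernel of bandwidth h yields the closed form
        l_h(u) = h * phi(u / h) + u * (Phi(u / h) - (1 - tau)),
    where phi and Phi are the standard normal pdf and cdf. The result is
    convex and everywhere differentiable, with derivative
    Phi(u / h) - (1 - tau), which recovers the quantile-loss subgradient
    tau - 1{u < 0} as h -> 0.
    """
    u = y_true - y_pred                        # residuals
    std_normal = Normal(0.0, 1.0)
    z = u / h
    phi = torch.exp(std_normal.log_prob(z))    # Gaussian pdf at u / h
    Phi = std_normal.cdf(z)                    # Gaussian cdf at u / h
    return (h * phi + u * (Phi - (1.0 - tau))).mean()
```

In this hypothetical setup, the smoothed loss simply replaces the pinball loss in an otherwise standard training loop, so gradient-based optimizers see a differentiable objective rather than the kinked check function.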
Primary Area: learning theory
Submission Number: 7858