Derivatives and residual distribution of regularized M-estimators with application to adaptive tuning

21 May 2021 (modified: 05 May 2023) · NeurIPS 2021 Submission · Readers: Everyone
Keywords: M-estimators, regularization, robustness, adaptive parameter tuning, p/n->const, Huber loss, Elastic-Net
Abstract: This paper studies M-estimators with gradient-Lipschitz loss function regularized with a convex penalty in linear models with Gaussian design matrix and arbitrary noise distribution. A practical example is the robust M-estimator constructed with the Huber loss and the Elastic-Net penalty when the noise distribution has heavy tails. Our main contributions are three-fold. (i) We provide general formulae for the derivatives of regularized M-estimators $\hat{\boldsymbol{\beta}}(\boldsymbol{y},\boldsymbol{X})$, where differentiation is taken with respect to both $\boldsymbol{y}$ and $\boldsymbol{X}$; this reveals a simple differentiability structure shared by all convex regularized M-estimators. (ii) Using these derivatives, we characterize the distribution of the residual $r_i = y_i-\boldsymbol{x}_i^\top\hat{\boldsymbol{\beta}}$ in the intermediate high-dimensional regime where dimension and sample size are of the same order. (iii) Motivated by the distribution of the residuals, we propose a novel adaptive criterion to select the tuning parameters of regularized M-estimators. The criterion approximates the out-of-sample error up to an additive constant independent of the estimator, so that minimizing the criterion provides a proxy for minimizing the out-of-sample error. The proposed adaptive criterion requires knowledge of neither the noise distribution nor the covariance of the design. Simulations confirm the theoretical findings, regarding both the distribution of the residuals and the success of the criterion as a proxy for the out-of-sample error. Finally, our results reveal new relationships between the derivatives of $\hat{\boldsymbol{\beta}}(\boldsymbol{y},\boldsymbol{X})$ and the effective degrees of freedom of the M-estimators, which are of independent interest.
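To make the setting concrete, the following is a minimal sketch (not the paper's code) of the practical example named in the abstract: a Huber-loss M-estimator with an Elastic-Net penalty, fit here by proximal gradient descent. The penalty parameterization (`lam`, `alpha`), the Huber threshold `delta`, and the iteration budget are illustrative assumptions, not values from the paper.

```python
import numpy as np

def huber_elastic_net(X, y, lam=5.0, alpha=0.9, delta=1.345, n_iter=500):
    """Huber-loss Elastic-Net M-estimator via proximal gradient (ISTA).

    Minimizes (illustrative parameterization):
        sum_i H_delta(y_i - x_i' b) + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)
    where H_delta is the Huber loss with threshold delta.
    """
    n, p = X.shape
    beta = np.zeros(p)
    # The Huber score is 1-Lipschitz, so the gradient of the smooth part
    # is Lipschitz with constant ||X||_2^2 + lam*(1 - alpha).
    L = np.linalg.norm(X, 2) ** 2 + lam * (1 - alpha)
    for _ in range(n_iter):
        r = y - X @ beta
        psi = np.clip(r, -delta, delta)              # Huber score function
        grad = -X.T @ psi + lam * (1 - alpha) * beta  # smooth part: Huber + ridge
        z = beta - grad / L
        # Soft-thresholding: proximal operator of the l1 part of the penalty.
        beta = np.sign(z) * np.maximum(np.abs(z) - lam * alpha / L, 0.0)
    return beta

# Usage on simulated data with heavy-tailed (Student-t) noise, as in the
# robust setting the abstract describes:
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0
y = X @ beta_true + rng.standard_t(df=2, size=n)
beta_hat = huber_elastic_net(X, y)
```

The clipping in `psi` is what gives robustness to heavy tails: large residuals contribute a bounded gradient, unlike the squared loss.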
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: Theoretical findings on the behavior of robust, regularized M-estimators as p/n->const: differentiability, distribution of residuals and adaptive parameter tuning
Supplementary Material: zip