Provable Identifiability of ReLU Neural Networks via Lasso Regularization

29 Sept 2021 (modified: 13 Feb 2023), ICLR 2022 Conference Withdrawn Submission
Keywords: Lasso, nonlinear regression, model selection
Abstract: LASSO regularization is a popular regression tool that enhances the prediction accuracy of statistical models by performing variable selection through the $\ell_1$ penalty; it was initially formulated for the linear model and its variants. In this paper, we extend the territory of LASSO to the neural network model, a widely used and powerful nonlinear regression model. Specifically, given a neural network whose output $y$ depends only on a small subset of the input $\boldsymbol{x}$, denoted by $\mathcal{S}^{\star}$, we prove that the LASSO estimator can stably reconstruct the neural network and identify $\mathcal{S}^{\star}$ when the number of samples scales logarithmically with the input dimension. This challenging regime is well understood for linear models but has barely been studied for neural networks. Our theory rests on an extended Restricted Isometry Property (RIP)-based analysis framework for two-layer ReLU neural networks, which may be of independent interest in other LASSO or neural network settings. Based on this result, we further propose a neural network-based variable selection method. Experiments on simulated and real-world datasets show the promising performance of our variable selection approach compared with classical techniques.
One-sentence Summary: We theoretically show that the Lasso estimator can stably identify ReLU neural networks, and we propose using neural networks as a vehicle for variable selection.
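To make the proposed approach concrete, here is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: a two-layer ReLU network is trained with an $\ell_1$ penalty on its first-layer weights, and input variables are then ranked by the column norms of that weight matrix. The synthetic data, the hidden width, the regularization weight `lam`, and the selection rule are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Synthetic data: y depends only on the first 3 of 50 inputs,
# i.e. the true support S* = {0, 1, 2} (an assumed toy setup).
n, d, s = 200, 50, 3
X = torch.randn(n, d)
y = torch.relu(X[:, :s]).sum(dim=1, keepdim=True) + 0.01 * torch.randn(n, 1)

# Two-layer ReLU network: y_hat = W2 relu(W1 x + b1) + b2.
m = 20  # hidden width (hypothetical choice)
model = torch.nn.Sequential(
    torch.nn.Linear(d, m),
    torch.nn.ReLU(),
    torch.nn.Linear(m, 1),
)

lam = 1e-2  # l1 regularization weight (hypothetical choice)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    pred = model(X)
    # Squared loss plus an l1 penalty on the first-layer weights,
    # which drives the columns of irrelevant inputs toward zero.
    loss = ((pred - y) ** 2).mean() + lam * model[0].weight.abs().sum()
    loss.backward()
    opt.step()

# Rank inputs by the l2 norm of the corresponding first-layer column:
# large norms indicate variables the fitted network actually uses.
scores = model[0].weight.detach().norm(dim=0)
selected = torch.argsort(scores, descending=True)[:s]
print("estimated support:", sorted(selected.tolist()))
```

In practice the regularization weight and the number of variables kept would be tuned, e.g. by cross-validation, rather than fixed as above.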
Supplementary Material: zip