Nash: Neural Adaptive Shrinkage for Structured High-Dimensional Regression

18 Sept 2025 (modified: 26 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Empirical Bayes, variational inference, split inference, high dimensional, penalized regression, penalty learning
TL;DR: We propose Neural Adaptive Shrinkage (Nash), a sparse regression method in which neural networks learn covariate-specific penalties, fit with a novel and highly efficient variational method.
Abstract: Sparse linear regression is a fundamental tool in data analysis. However, traditional approaches often fall short when covariates exhibit structure or arise from heterogeneous sources. In biomedical applications, covariates may stem from distinct modalities or be structured according to an underlying graph. We introduce Neural Adaptive Shrinkage (Nash), a unified framework that integrates covariate-specific side information into sparse regression via neural networks. Nash adaptively modulates penalties on a per-covariate basis, learning to tailor regularization without cross-validation. We develop a variational inference algorithm for efficient training and establish connections to empirical Bayes regression. Experiments on real data demonstrate Nash’s improved accuracy and adaptability over existing methods.
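The core idea — a neural network mapping per-covariate side information to a covariate-specific penalty — can be sketched minimally. The data, the one-layer network, and the plain proximal-gradient (ISTA) solver below are illustrative assumptions, not the paper's actual variational fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 100, 20, 3

# Simulated sparse linear model with side information per covariate.
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:4] = 2.0
y = X @ beta_true + rng.normal(scale=0.5, size=n)
S = rng.normal(size=(p, d))  # hypothetical covariate-specific side information

# Hypothetical one-layer "network" mapping side info to a positive penalty.
W = rng.normal(scale=0.1, size=(d,))
def penalties(S, W, b=0.0):
    return np.log1p(np.exp(S @ W + b))  # softplus keeps each lambda_j > 0

# ISTA: proximal gradient for a lasso-type objective with
# per-covariate penalties lambda_j instead of one global lambda.
lam = penalties(S, W)
L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part
beta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ beta - y)
    z = beta - grad / L
    beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
```

In a learned version, the network weights `W` would themselves be trained (here via the paper's variational objective) so that covariates whose side information signals relevance receive smaller penalties, removing the need for cross-validation over a single global regularization level.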
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 12031