Towards Hyperparameter Optimization of Sparse Bayesian Learning Based on Stein's Unbiased Risk Estimator

Published: 15 Apr 2024, Last Modified: 09 May 2024
Learn to Compress @ ISIT 2024 Poster
License: CC BY 4.0
Keywords: Stein's unbiased risk estimator, sparse Bayesian learning, generalized Gaussian distribution, mean square error
Abstract: Sparse Bayesian Learning (SBL) is a sparse signal recovery algorithm in compressed sensing that requires the estimation of several hyperparameters. These hyperparameters can be optimized using Stein's Unbiased Risk Estimator (SURE), which is asymptotically equivalent to minimizing the Mean Squared Error (MSE). In this paper, we analyze the minimum MSE attained when the hyperparameters are optimized directly with respect to the MSE. Additionally, we explore extending SBL's Gaussian prior to a generalized Gaussian prior by analyzing the Laplacian and uniform priors, two special cases of the generalized Gaussian prior. Simulation experiments show that the Gaussian prior outperforms the others for underestimated and deterministic signals, accurately recovering $0$ when the hyperparameters are optimized via MSE. For non-zero signals, the uniform prior performs best, whereas the Laplacian prior consistently performs worse than the other two, with its minimum MSE equal to the variance of the extrinsic information.
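The relationship between SURE and MSE mentioned in the abstract can be illustrated with a minimal, self-contained sketch. The example below is not the paper's SBL estimator: it uses a generic soft-thresholding denoiser under additive Gaussian noise, and the signal model, threshold `lam`, and noise level `sigma` are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, lam = 1000, 0.5, 0.7

# Hypothetical sparse ground-truth signal (not from the paper).
x = np.zeros(n)
x[rng.choice(n, size=50, replace=False)] = rng.normal(0, 3, size=50)

def soft_threshold(y, lam):
    """Elementwise soft-thresholding denoiser."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure(y, lam, sigma):
    """Stein's Unbiased Risk Estimate of the MSE of soft-thresholding
    under y = x + noise with noise ~ N(0, sigma^2 I)."""
    xhat = soft_threshold(y, lam)
    divergence = np.count_nonzero(np.abs(y) > lam)  # sum_i d f_i(y) / d y_i
    return -n * sigma**2 + np.sum((xhat - y) ** 2) + 2 * sigma**2 * divergence

# Compare SURE with the true squared error over noise realizations.
trials = 200
sure_vals, mse_vals = [], []
for _ in range(trials):
    y = x + sigma * rng.normal(size=n)
    sure_vals.append(sure(y, lam, sigma) / n)
    mse_vals.append(np.sum((soft_threshold(y, lam) - x) ** 2) / n)

print(f"mean SURE:     {np.mean(sure_vals):.4f}")
print(f"mean true MSE: {np.mean(mse_vals):.4f}")
```

Averaged over noise realizations, the SURE values concentrate around the true MSE, which is the property exploited when tuning a hyperparameter such as the threshold via SURE instead of the inaccessible MSE; the paper applies the same idea to the hyperparameters of SBL.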
Submission Number: 13