Abstract: Sparse Bayesian Learning (SBL) is a popular sparse signal recovery method, and various algorithms exist under the SBL paradigm. In this paper, we introduce a novel re-parameterization that allows the iterations of existing algorithms to be viewed as special cases of a unified and general mapping function. Furthermore, the re-parameterization enables an interesting beamforming interpretation that lends insight into all the considered algorithms. Utilizing the abstraction afforded by the general mapping viewpoint, we introduce a novel neural network architecture for learning improved iterative update rules under the SBL framework. The modular design of the architecture makes the model independent of the size of the measurement matrix and provides a unique opportunity to test generalization across different measurement matrices. We show that the network, when trained on a particular parameterized dictionary, generalizes in ways hitherto not possible: to different measurement matrices, in both type and dimension, and to different numbers of snapshots. Our numerical results showcase the generalization capability of the network in terms of mean square error and probability of support recovery across sparsity levels, signal-to-noise ratios, numbers of snapshots, and multiple measurement matrices of different sizes.
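The abstract does not spell out the iterative update rules it refers to. As background only, the sketch below shows the classical EM-based SBL iteration (in the style of Tipping's relevance vector machine), which is one of the existing update rules that a unified mapping over SBL algorithms would need to subsume; the function name `sbl_em`, its parameters, and the stopping choice are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=1e-2, n_iters=100, floor=1e-10):
    """Minimal sketch of classical SBL via EM hyperparameter updates
    for the model y = Phi x + noise, noise variance sigma2 (assumed known here)."""
    M, N = Phi.shape
    gamma = np.ones(N)                      # prior variances of the entries of x
    for _ in range(n_iters):
        Gamma = np.diag(gamma)
        # Posterior statistics of x given the current hyperparameters
        Sigma_y = sigma2 * np.eye(M) + Phi @ Gamma @ Phi.T
        Sigma_y_inv = np.linalg.inv(Sigma_y)
        Sigma_x = Gamma - Gamma @ Phi.T @ Sigma_y_inv @ Phi @ Gamma
        mu_x = Gamma @ Phi.T @ Sigma_y_inv @ y
        # EM update of the hyperparameters; small values indicate pruned coefficients
        gamma = mu_x**2 + np.diag(Sigma_x)
        gamma = np.maximum(gamma, floor)    # numerical safeguard
    return mu_x, gamma

# Example usage on a synthetic sparse recovery problem
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
x_true = np.zeros(50); x_true[[3, 17, 42]] = [1.0, -0.8, 0.6]
y = Phi @ x_true + 0.05 * rng.standard_normal(20)
x_hat, gamma = sbl_em(Phi, y, sigma2=0.05**2)
```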
External IDs: dblp:conf/icassp/BalajiCR25