Sparse Bayesian Generative Modeling for Compressive Sensing

Published: 25 Sept 2024 · Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · CC BY 4.0
Keywords: Compressive sensing, variational inference, sparse Bayesian learning, variational autoencoder, Gaussian mixture model, generative model
TL;DR: This work introduces a new type of sparsity-inducing generative prior for compressive sensing.
Abstract: This work addresses the fundamental linear inverse problem in compressive sensing (CS) by introducing a new type of regularizing generative prior. Our proposed method utilizes ideas from classical dictionary-based CS and, in particular, sparse Bayesian learning (SBL) to incorporate strong regularization toward sparse solutions. At the same time, by leveraging the notion of conditional Gaussianity, it retains the adaptability to training data that is characteristic of generative models. However, unlike most state-of-the-art generative models, it is able to learn from only a few compressed, noisy data samples and requires no optimization algorithm for solving the inverse problem. Additionally, similar to Dirichlet prior networks, our model parameterizes a conjugate prior, which enables its use for uncertainty quantification. We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals.
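For background, the abstract builds on classical sparse Bayesian learning, whose conditionally Gaussian prior the paper's generative model extends. Below is a minimal NumPy sketch of that classical SBL building block only, not the paper's learned prior: a toy CS measurement model y = Ax + n with a known noise variance, where the per-component prior variances gamma are updated by EM. The dimensions, variable names, and fixed noise variance are illustrative assumptions.

```python
import numpy as np

def sbl_posterior(A, y, gamma, sigma2):
    """Gaussian posterior of x under the conditionally Gaussian prior x ~ N(0, diag(gamma))."""
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    return mu, Sigma

def sbl_em(A, y, sigma2, n_iter=50):
    """Classical SBL: EM updates of the per-component prior variances gamma."""
    gamma = np.ones(A.shape[1])
    for _ in range(n_iter):
        mu, Sigma = sbl_posterior(A, y, gamma, sigma2)
        # M-step: gamma_i = E[x_i^2] = mu_i^2 + Sigma_ii (clipped for numerical stability)
        gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)
    return mu, gamma

# Toy problem (assumed sizes): m = 40 noisy measurements of an n = 100-dim, 5-sparse signal.
rng = np.random.default_rng(0)
m, n, k, sigma2 = 40, 100, 5, 1e-3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)
x_hat, gamma = sbl_em(A, y, sigma2)
print("NMSE:", np.linalg.norm(x_hat - x_true)**2 / np.linalg.norm(x_true)**2)
```

The EM iteration drives most gamma_i toward zero, pruning the corresponding components and yielding a sparse posterior mean; the paper's contribution is to make such a conditionally Gaussian prior learnable from compressed, noisy training data rather than fixed.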
Primary Area: Probabilistic methods (for example: variational inference, Gaussian processes)
Submission Number: 19046