Auto-encoders for compressed sensing

Published: 21 Oct 2019, Last Modified: 05 May 2023. NeurIPS 2019 Deep Inverse Workshop Poster.
Abstract: Compressed sensing concerns the recovery of a structured high-dimensional signal ${\bf x}\in\mathbb{R}^n$ from its under-determined noisy linear measurements ${\bf y}\in\mathbb{R}^m$, where $m\ll n$. While the vast majority of the literature in this area focuses on sparse signals, in recent years there has been considerable progress on compressed sensing of signals with structures beyond sparsity. One promising approach in this direction is to employ generative models based on trained neural networks. In this paper, we study the performance of an iterative algorithm based on projected gradient descent that employs an auto-encoder to define and enforce the source structure. The auto-encoder consists of a generative function $g:\mathbb{R}^k\rightarrow\mathbb{R}^n$ and a separate neural network that is trained to act as the inverse of $g$. We prove that, for a generative model $g$ with $\ell_2$ representation error $\delta$, given roughly $m>40k\log\frac{1}{\delta}$ measurements, such an algorithm converges, even in the presence of additive white Gaussian noise.
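The projected gradient descent scheme described in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: the trained decoder $g$ and its learned inverse are replaced here by a random linear decoder `G` and its pseudo-inverse, so that the projection step (gradient step on the measurement loss, then mapping back onto the range of $g$) can be shown end to end; all names (`G`, `project`, `eta`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5  # ambient dim, number of measurements, latent dim

# Toy stand-in for the generative model g: R^k -> R^n (a random linear
# decoder) and for the neural network trained to invert g (its pseudo-inverse).
G = rng.standard_normal((n, k)) / np.sqrt(n)

def g(z):
    return G @ z

def e(x):
    # Plays the role of the trained encoder (approximate inverse of g).
    return np.linalg.pinv(G) @ x

def project(x):
    """Enforce the source structure: map x onto the range of g."""
    return g(e(x))

# Structured ground-truth signal and noisy under-determined measurements.
x_true = g(rng.standard_normal(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Projected gradient descent on ||y - Ax||^2: gradient step, then projection.
x = np.zeros(n)
eta = 0.5  # step size (illustrative choice)
for _ in range(200):
    x = project(x + eta * A.T @ (y - A @ x))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With $m = 40 \gg k = 5$, the measurement operator restricted to the range of the decoder is well conditioned, so the iterates contract toward the true signal up to a noise floor; the same gradient-then-project structure carries over when `g` and `e` are trained neural networks.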