Lower Bounds for Compressed Sensing with Generative Models

Published: 21 Oct 2019, Last Modified: 05 May 2023
NeurIPS 2019 Deep Inverse Workshop Poster
TL;DR: A lower bound for compressed sensing with generative models that matches known upper bounds
Keywords: lower bounds, compressed sensing, sparsity
Abstract: The goal of compressed sensing is to learn a structured signal $x$ from a limited number of noisy linear measurements $y \approx Ax$. In traditional compressed sensing, "structure" is represented by sparsity in some known basis. Inspired by the success of deep learning in modeling images, recent work starting with \cite{BDJP17} has instead considered structure to come from a generative model $G: \mathbb{R}^k \to \mathbb{R}^n$. We present two results establishing the difficulty of this latter task, showing that existing bounds are tight. First, we provide a lower bound matching the \cite{BDJP17} upper bound for compressed sensing from $L$-Lipschitz generative models $G$. In particular, there exists such a function that requires roughly $\Omega(k \log L)$ linear measurements for sparse recovery to be possible. This holds even for the more relaxed goal of \emph{nonuniform} recovery. Second, we show that generative models generalize sparsity as a representation of structure. In particular, we construct a ReLU-based neural network $G: \mathbb{R}^{2k} \to \mathbb{R}^n$ with $O(1)$ layers and $O(kn)$ activations per layer, such that the range of $G$ contains all $k$-sparse vectors.
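To illustrate the second result, below is a minimal numpy sketch of one way such a network can arise; it is an illustrative assumption on my part, not the paper's construction, and the names `build_block`, `g_block`, and the pseudoinverse interpolation step are hypothetical. The idea: a bias-free two-layer ReLU network $g$ is positively homogeneous ($g(tz) = t\,g(z)$ for $t \ge 0$), so it suffices to hit $+e_i$ and $-e_i$ on $2n$ unit input directions and let scaling along each ray supply all magnitudes. Summing $k$ independent copies of the block then covers every $k$-sparse vector, with $O(n)$ activations per block, i.e. $O(kn)$ per layer as in the abstract.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def build_block(n, hidden_mult=4, seed=0):
    """Bias-free 2-layer ReLU block g: R^2 -> R^n whose range contains
    every 1-sparse vector. With no biases, g(t*u) = t*g(u) for t >= 0,
    so it is enough to map 2n unit directions to +e_i and -e_i."""
    rng = np.random.default_rng(seed)
    m = hidden_mult * n                                   # O(n) hidden units
    psi = rng.uniform(0.0, 2.0 * np.pi, size=m)           # hidden directions
    W1 = np.stack([np.cos(psi), np.sin(psi)], axis=1)     # (m, 2)
    theta = np.linspace(0.0, 2.0 * np.pi, 2 * n, endpoint=False)
    U = np.stack([np.cos(theta), np.sin(theta)], axis=0)  # (2, 2n) targets
    T = np.concatenate([np.eye(n), -np.eye(n)], axis=1)   # u_j -> column j
    H = relu(W1 @ U)                                      # (m, 2n) features
    W2 = T @ np.linalg.pinv(H)                            # solve W2 @ H = T
    # Exact interpolation requires the random features to have rank 2n,
    # which holds generically; the assert guards the assumption.
    assert np.allclose(W2 @ H, T, atol=1e-6), "needs rank(H) = 2n"
    return W1, W2, U

def g_block(W1, W2, z):
    return W2 @ relu(W1 @ z)

# Summing k independent blocks covers every k-sparse vector: feed block b
# the input |v_b| * u_j, where u_j maps to sign(v_b) * e_{i_b}.
n, k = 10, 3
W1, W2, U = build_block(n)
idx, vals = [2, 5, 7], [3.0, -1.5, 0.25]
x = sum(g_block(W1, W2, abs(v) * U[:, (i if v >= 0 else n + i)])
        for i, v in zip(idx, vals))
target = np.zeros(n)
target[idx] = vals
print(np.allclose(x, target))  # True: the k-sparse target is in the range
```

The homogeneity trick is the design point worth noting: because a bias-free ReLU network's range is closed under nonnegative scaling, containing all $k$-sparse vectors reduces to hitting finitely many directions, which a single hidden layer can do.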