Dictionary Learning Under Generative Coefficient Priors with Applications to Compression

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Dictionary learning, generative priors, sparsity, alternating minimization, linear transformation, transfer learning, compression, algorithms
Abstract: There is a rich literature on recovering data from limited measurements under the assumption of sparsity in some basis, whether known (compressed sensing) or unknown (dictionary learning). In particular, classical dictionary learning assumes the given dataset is well-described by sparse combinations of an unknown basis set. However, this assumption is of limited validity on real-world data. Recent work spanning theory and computational science has sought to replace the canonical sparsity assumption with more complex data priors, demonstrating how to incorporate pretrained generative models into frameworks such as compressed sensing and phase retrieval. Typically, the dimensionality of the input space of the generative model is much smaller than that of the output space, paralleling the “low description complexity,” or compressibility, of sparse vectors. In this paper, we study dictionary learning under this kind of known generative prior on the coefficients, which may capture non-trivial low-dimensional structure in the coefficients. This is a distributional learning approach to compression, in which we learn a suitable dictionary given access to a small dataset of training instances and a specified generative model for the coefficients. Equivalently, it may be viewed as transfer learning for generative models, in which we learn a new linear layer (the dictionary) to fine-tune a pretrained generative model (the coefficient prior) on a new dataset. We give, to our knowledge, the first provable algorithm for recovering the unknown dictionary given a suitable initialization. Finally, we compare our approach to traditional dictionary learning algorithms on synthetic compression and denoising tasks, demonstrating empirically the advantages of incorporating finer-grained structure than sparsity.
One-sentence Summary: We present a new model for dictionary learning under a generative coefficient prior, which strictly generalizes the traditional assumption of sparse coefficients, and provide a provable local algorithm for recovering the unknown dictionary.
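Illustrative sketch (not part of the submission): the abstract describes learning a dictionary A for data modeled as y ≈ A·G(z), where G is a frozen, pretrained generator over the coefficients, via a provable local (alternating-minimization-style) algorithm. Since the page gives only the abstract, the following minimal PyTorch sketch shows one plausible form of that setup; the generator architecture, problem dimensions, initialization scale, and optimizer settings are placeholder assumptions, not the paper's actual algorithm or hyperparameters.

# Hedged sketch: alternating minimization for y ≈ A @ G(z), with a frozen
# generative coefficient prior G and an unknown dictionary A. All choices
# below (architecture, dims, step sizes) are illustrative assumptions.
import torch

torch.manual_seed(0)
d, k, m, n = 32, 8, 64, 400  # signal dim, latent dim, coefficient dim, samples

# Placeholder generator G: R^k -> R^m, standing in for a pretrained prior.
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, m))
for p in G.parameters():
    p.requires_grad_(False)  # the coefficient prior stays fixed

# Synthetic data from a ground-truth dictionary, for illustration only.
A_true = torch.randn(d, m) / m ** 0.5
Y = G(torch.randn(n, k)) @ A_true.T          # rows y_i = A_true @ G(z_i)

A = A_true + 0.1 * torch.randn(d, m)         # a "suitable initialization"
Z = torch.randn(n, k).requires_grad_(True)   # per-sample latent estimates
opt_z = torch.optim.Adam([Z], lr=1e-2)

for step in range(100):
    # (a) Latent step: fit each z_i to y_i through the frozen generator.
    for _ in range(25):
        opt_z.zero_grad()
        loss = ((G(Z) @ A.T - Y) ** 2).mean()
        loss.backward()
        opt_z.step()
    # (b) Dictionary step: least squares Y ≈ C @ A.T with C = G(Z) fixed.
    with torch.no_grad():
        A = torch.linalg.lstsq(G(Z), Y).solution.T

print("relative dictionary error:",
      (torch.norm(A - A_true) / torch.norm(A_true)).item())

Under this reading, the compression and denoising experiments mentioned in the abstract would reuse the latent step at test time with the learned A frozen: a new signal y is encoded by the k-dimensional latent z minimizing ||A G(z) - y||^2 and decoded as A G(z), which also serves as the denoised estimate.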
Supplementary Material: zip