Subgradient Descent Learns Orthogonal Dictionaries

Published: 21 Dec 2018, Last Modified: 29 Sept 2024 · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem. We show that a subgradient descent algorithm, with random initialization, can recover orthogonal dictionaries on a natural nonsmooth, nonconvex L1 minimization formulation of the problem, under mild statistical assumptions on the data. This is in contrast to previous provable methods, which require either expensive computation or delicate initialization schemes. Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which may be of independent interest for provable training of deep networks with nonsmooth activations (e.g., ReLU), among other applications. Preliminary experiments on synthetic and real data corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries.
Keywords: Dictionary learning, Sparse coding, Non-convex optimization, Theory
TL;DR: Efficient dictionary learning by L1 minimization via a novel analysis of the non-convex non-smooth geometry.
Code: [![github](/images/github_icon.svg) sunju/ODL_L1](https://github.com/sunju/ODL_L1)