## Understanding l4-based Dictionary Learning: Interpretation, Stability, and Robustness

25 Sept 2019, 19:24 (modified: 11 Mar 2020, 07:34) · ICLR 2020 Conference Blind Submission
Code: https://github.com/hermish/ZMZM-ICLR-2020
Keywords: L4-norm Maximization, Robust Dictionary Learning
TL;DR: We compare $\ell^4$-norm based dictionary learning with PCA and ICA, and show its stability as well as robustness.
Abstract: Recently, $\ell^4$-norm maximization has been proposed to solve the sparse dictionary learning (SDL) problem. The simple MSP (matching, stretching, and projection) algorithm proposed by \cite{zhai2019a} has proven surprisingly efficient and effective. This paper aims to better understand this algorithm through its strong geometric and statistical connections with the classic PCA and ICA, as well as their associated fixed-point style algorithms. Such connections provide a unified way of viewing problems that pursue {\em principal}, {\em independent}, or {\em sparse} components of high-dimensional data. Our studies reveal additional good properties of $\ell^4$-maximization: not only is the MSP algorithm for sparse coding insensitive to small noise, but it is also robust to outliers and resilient to sparse corruptions. We provide statistical justification for these desirable properties. To corroborate the theoretical analysis, we also provide extensive and compelling experimental evidence with both synthetic data and real images.
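As a minimal illustration of the fixed-point style iteration the abstract refers to, the sketch below implements one plausible reading of the MSP step for $\ell^4$-norm maximization over the orthogonal group: the gradient of $\|AY\|_4^4$ with respect to $A$ is proportional to $(AY)^{\circ 3} Y^\top$ (matching and stretching), and the result is projected back onto orthogonal matrices via the polar decomposition (projection). All names, the toy data model, and the parameter choices here are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def msp_step(A, Y):
    """One MSP iteration: ascend the l4 objective, then project onto O(n).

    The gradient of ||A Y||_4^4 w.r.t. A is 4 * (A Y)^{.3} @ Y.T; the
    orthogonal matrix maximizing <A, G> is U @ Vt from the SVD of G.
    """
    G = (A @ Y) ** 3 @ Y.T            # matching & stretching
    U, _, Vt = np.linalg.svd(G)
    return U @ Vt                     # projection onto the orthogonal group

# Toy experiment: sparse data generated from an orthogonal dictionary.
rng = np.random.default_rng(0)
n, p = 10, 5000
D = np.linalg.qr(rng.standard_normal((n, n)))[0]              # ground truth
X = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.3)  # sparse codes
Y = D @ X

A = np.linalg.qr(rng.standard_normal((n, n)))[0]              # random orthogonal init
obj0 = np.sum((A @ Y) ** 4)                                   # initial l4 objective
for _ in range(50):
    A = msp_step(A, Y)
```

When recovery succeeds, `A @ D` is close to a signed permutation matrix, i.e. the learned dictionary matches the ground truth up to sign and ordering of its columns; monitoring `np.sum((A @ Y) ** 4)` shows the objective increasing monotonically across iterations.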