Feature learning from non-Gaussian inputs: the case of Independent Component Analysis in high dimensions
TL;DR: We establish nearly sharp thresholds for the algorithmic sample complexity of independent component analysis, an unsupervised learning method that learns filters similar to those of deep convolutional networks.
Abstract: Deep neural networks learn structured features from complex, non-Gaussian inputs, but the mechanisms behind this process remain poorly understood.
Our work is motivated by the observation that the first-layer filters learnt by deep convolutional neural networks from natural images resemble those learnt by independent component analysis (ICA), a simple unsupervised method that seeks the most non-Gaussian projections of its inputs.
This similarity suggests that ICA provides a simple, yet principled model for studying feature learning.
Here, we leverage this connection to investigate the interplay between data structure and optimisation in feature learning for the most popular ICA algorithm, FastICA, and stochastic gradient descent (SGD), which is used to train deep networks.
We rigorously establish that FastICA requires at least $n\gtrsim d^4$ samples to recover a single non-Gaussian direction from $d$-dimensional inputs on a simple synthetic data model. We show that vanilla online SGD outperforms FastICA, and prove that the optimal sample complexity $n\gtrsim d^2$ can be reached by smoothing the loss, albeit in a data-dependent way. Finally, we demonstrate the existence of a search phase for FastICA on ImageNet, and discuss how the strong non-Gaussianity of these images compensates for the poor sample complexity of FastICA.
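To make the comparison concrete, the following is a minimal sketch of one kurtosis-based FastICA unit and plain online SGD recovering a single planted non-Gaussian direction from whitened inputs. The data model, contrast function, and all parameter values (d, n, eta, w_star) are illustrative assumptions for this demo, not the exact setting or scalings analysed in the paper.

```python
# Minimal, illustrative sketch: recover a planted non-Gaussian direction w_star
# from whitened d-dimensional inputs, comparing one unit of kurtosis-based
# FastICA with plain online SGD on a fourth-moment contrast.
# All parameter values are hypothetical choices for this demo.
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 100_000

# Synthetic inputs: the projection onto w_star is Rademacher (non-Gaussian,
# sub-Gaussian); all orthogonal directions are standard Gaussian, so the
# covariance is the identity (data already whitened).
w_star = np.zeros(d)
w_star[0] = 1.0
X = rng.standard_normal((n, d))
X[:, 0] = rng.choice([-1.0, 1.0], size=n)

def overlap(w):
    return abs(float(w @ w_star))

# --- Single-unit FastICA fixed-point iteration, kurtosis contrast G(u) = u^4/4 ---
# w <- E[x g(w.x)] - E[g'(w.x)] w with g(u) = u^3, g'(u) = 3u^2, then renormalise.
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for _ in range(50):
    u = X @ w
    w_new = (X * u[:, None] ** 3).mean(axis=0) - 3.0 * (u ** 2).mean() * w
    w = w_new / np.linalg.norm(w_new)
print("FastICA    overlap |w . w_star| =", overlap(w))

# --- Plain online (spherical) SGD on the per-sample loss (w.x)^4 / 4 ---
# The planted projection is sub-Gaussian, so we *descend* on the fourth moment;
# for a super-Gaussian signal one would ascend instead.
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
eta = 0.05 / d
for x in X:                      # one pass over the data, one sample per step
    u = float(x @ w)
    w -= eta * (u ** 3) * x
    w /= np.linalg.norm(w)       # retraction back to the unit sphere
print("online SGD overlap |w . w_star| =", overlap(w))
```

Whether either iteration escapes its random initialisation depends on how n compares with the sample-complexity thresholds discussed above; the small dimension chosen here keeps both runs well inside the recoverable regime.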
Lay Summary: How do neural networks learn to "see"? Unlike classical machine learning methods, neural networks automatically learn image-processing filters from data -- a key advantage, but one that remains poorly understood. The main difficulty for the analysis is that neural networks exploit complex patterns that cannot be captured by simple averages or pair-wise relations between pixels, whereas existing theories account at most for such pair-wise relations, which are captured by Gaussian distributions.
In this paper, we investigate feature learning by analysing a simpler method, Independent Component Analysis (ICA). ICA seeks the most non-Gaussian projections of its inputs -- and finds filters similar to those learnt by neural networks.
Comparing the two most popular algorithms for ICA on high-dimensional inputs, we find that FastICA needs far more data than is theoretically necessary, while stochastic gradient descent (with a small modification) learns with the theoretical minimum number of samples. Since our analysis relies on some simplifying assumptions about the data, we corroborate our findings with experiments on real images.
Our results improve our understanding of how to learn efficiently from the complex correlations in real data, and we provide new technical tools for future analysis.
Primary Area: Theory
Keywords: independent component analysis, stochastic gradient descent, FastICA
Submission Number: 7862