Modelling the influence of data structure on learning in neural networks

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Keywords: Neural Networks, Generative Models, Synthetic Data Sets, Generalisation, Stochastic Gradient Descent
TL;DR: We demonstrate how structure in data sets impacts neural networks and introduce a generative model for synthetic data sets that reproduces this impact.
Abstract: The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to a detailed theoretical understanding of deep neural networks. Here, we first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer networks trained on two different data sets: (i) an unstructured synthetic data set containing random i.i.d. inputs, and (ii) a simple canonical data set such as MNIST images. Our analysis reveals two phenomena related to the dynamics of the networks and their ability to generalise that appear only when training on structured data sets. Second, we introduce a generative model for data sets, where high-dimensional inputs lie on a lower-dimensional manifold and have labels that depend only on their position within this manifold. We call it the *hidden manifold model*, and we experimentally demonstrate that training networks on data sets drawn from this model reproduces both of the phenomena observed during training on MNIST.
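To make the generative model described in the abstract concrete, here is a minimal NumPy sketch of a data set whose inputs lie on a low-dimensional manifold while labels depend only on the latent coordinates. The specific choices below (a fixed Gaussian feature matrix, a tanh embedding, a sign teacher on the latent coordinates, and the function name `make_hidden_manifold_data`) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def make_hidden_manifold_data(n_samples, input_dim, latent_dim, rng=None):
    """Draw a synthetic data set whose inputs lie on a low-dimensional manifold.

    Latent coordinates C live in R^latent_dim; inputs are a fixed random
    projection of C into R^input_dim passed through a pointwise nonlinearity.
    Labels depend only on C, not on the high-dimensional representation.
    """
    rng = np.random.default_rng(rng)
    # Fixed feature matrix defining the manifold's embedding in input space.
    F = rng.standard_normal((latent_dim, input_dim))
    # Latent coordinates of each sample on the manifold.
    C = rng.standard_normal((n_samples, latent_dim))
    # Nonlinear embedding: inputs are high-dimensional but have only
    # latent_dim degrees of freedom.
    X = np.tanh(C @ F / np.sqrt(latent_dim))
    # Teacher weights act on the latent coordinates only, so the labels
    # are a function of position on the manifold.
    w = rng.standard_normal(latent_dim)
    y = np.sign(C @ w / np.sqrt(latent_dim))
    return X, y

# Example: MNIST-sized inputs (784 dimensions) with a 10-dimensional manifold.
X, y = make_hidden_manifold_data(n_samples=1000, input_dim=784, latent_dim=10, rng=0)
```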
Code: https://drive.google.com/file/d/1L0UOtOoRTYSHZtTxMxKIQuZLEuVaoJl_/view?usp=sharing
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1909.11500/code)