Critical Percolation as a Framework to Analyze the Training of Deep Networks
Nov 03, 2017 (modified: Dec 12, 2017) · ICLR 2018 Conference Blind Submission · Readers: everyone
Abstract: In this paper we address two topics in deep learning: i) handling graph-structured input data and ii) improving the understanding and analysis of deep networks and the associated learning algorithms. With this in mind, we focus on the topological classification of reachability in a particular subset of planar graphs (mazes). This lets us model the topology of the data while staying in Euclidean space, so the data can be processed with standard CNN architectures. We propose a suitable architecture for this problem and show that it can express a perfect solution to the classification task. We also derive the shape of the cost function around this solution; remarkably, it does not depend on the size of the maze in the large-maze limit. Responsible for this behavior are rare events in the dataset, which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima, in which the network chooses to ignore some of its inputs. We support our claims with training experiments and numerical analysis of the cost function on networks with up to $128$ layers.
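As a rough illustration of the kind of dataset the abstract describes (not the authors' actual code, and the function name, grid size, and threshold below are assumptions), one can generate planar mazes by site percolation near the critical threshold and label each maze by whether its two corners are connected, which is the reachability classification a CNN would then be trained on:

```python
import numpy as np
from collections import deque

def make_maze_sample(n=16, p=0.5927, seed=None):
    """Hypothetical sketch: one maze sample in the spirit of the paper's dataset.

    Each cell of an n x n grid is open with probability p; for site
    percolation on the square lattice the critical threshold is roughly
    p_c ~ 0.5927, which makes the two labels approximately balanced.
    The label records whether a path of horizontally/vertically adjacent
    open cells connects the top-left to the bottom-right corner (BFS).
    """
    rng = np.random.default_rng(seed)
    grid = (rng.random((n, n)) < p).astype(np.int8)  # 1 = open, 0 = wall
    grid[0, 0] = grid[n - 1, n - 1] = 1  # force the two endpoints open

    seen = np.zeros((n, n), dtype=bool)
    queue = deque([(0, 0)])
    seen[0, 0] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and grid[rr, cc] and not seen[rr, cc]:
                seen[rr, cc] = True
                queue.append((rr, cc))

    label = int(seen[n - 1, n - 1])  # 1 = reachable, 0 = not
    return grid, label
```

Because the maze stays a Euclidean image (a binary n x n array), it can be fed directly to a standard convolutional network, which is the point the abstract makes about modeling topology without leaving Euclidean space.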
TL;DR: A toy dataset based on critical percolation in a planar graph provides an analytical window into the training dynamics of deep neural networks.
Keywords: Deep Convolutional Networks, Loss function landscape, Graph Structured Data, Training Complexity, Theory of deep learning, Percolation theory, Anderson Localization