USING FUNCTION SPACE THEORY FOR UNDERSTANDING INTERMEDIATE LAYERS

29 Jan 2018, ICLR 2018 Workshop Submission
Abstract: The representational change of the input along the intermediate layers is an important aspect of understanding deep learning architectures. To this end, we propose an approach that relies on the foundations of Function Space theory. In particular, we argue that a weak-type Besov smoothness index can quantify the geometry of the clustering in the feature space of each layer. Our approach may therefore provide an additional perspective for understanding the fit between data and models in the setting of deep learning. While using a different framework and perspective, our experiments are in line with the results of Tishby & Zaslavsky (2015) and Montavon et al. (2010), in the sense that for well-performing trained networks the quality of the representation increases from layer to layer. Our approach could also be used for addressing generalization (Zhang et al., 2016; Kawaguchi et al., 2017), as we also show that the Besov smoothness of the layer representations of the training set decreases as more mislabeled examples are added.
TL;DR: We propose a Function Space theory approach that describes the change of the input along the intermediate layers of deep learning architectures.
Keywords: deep learning, representation layers, Function Space, wavelets, approximation, Besov smoothness
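
The abstract's weak-type Besov smoothness index is presumably estimated through a wavelet-type decomposition of each layer's feature space (the keywords mention wavelets and approximation). The sketch below is only a rough, hypothetical illustration of how such a per-layer index could be computed, not the paper's construction: it fits a single decision tree to one layer's activations, treats the jump between each node's mean response and its parent's as a geometric-wavelet coefficient norm, and reads a smoothness proxy off the decay rate of the sorted norms. The name besov_smoothness_proxy and all parameters are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): a per-layer smoothness proxy from
# the decay of tree-based "wavelet" coefficient norms. Not the authors' estimator.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def besov_smoothness_proxy(activations, labels_onehot, max_depth=8):
    """Return a smoothness proxy: decay exponent of sorted coefficient norms."""
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(activations, labels_onehot)
    t = tree.tree_
    means = t.value[:, :, 0]            # mean response at each tree node
    sizes = t.weighted_n_node_samples   # number of samples reaching each node

    # Recover each node's parent from the children arrays (root stays -1).
    parent = np.full(t.node_count, -1)
    for i in range(t.node_count):
        for child in (t.children_left[i], t.children_right[i]):
            if child != -1:
                parent[child] = i

    # ||psi_node|| ~ |mean(node) - mean(parent)| * sqrt(#samples in node)
    norms = np.array([
        np.linalg.norm(means[i] - means[parent[i]]) * np.sqrt(sizes[i])
        for i in range(1, t.node_count)
    ])
    norms = np.sort(norms[norms > 0])[::-1]   # largest coefficients first

    # Log-log fit of the k-th largest norm against k; steeper decay => smoother.
    k = np.arange(1, norms.size + 1)
    slope, _ = np.polyfit(np.log(k), np.log(norms), 1)
    return -slope

# Usage idea: compute the proxy for each layer's activations of a trained network
# and check whether it increases from layer to layer, as the abstract reports.
# alphas = [besov_smoothness_proxy(a, y_onehot) for a in layer_activations]
```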