Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology

Published: 21 Dec 2018, Last Modified: 14 Oct 2024. ICLR 2019 Conference Blind Submission.
Abstract: While many approaches to make neural networks more fathomable have been proposed, they are restricted to interrogating the network with input data. Measures for characterizing and monitoring structural properties, however, have not been developed. In this work, we propose neural persistence, a complexity measure for neural network architectures based on topological data analysis of weighted stratified graphs. To demonstrate the usefulness of our approach, we show that neural persistence reflects best practices developed in the deep learning community, such as dropout and batch normalization. Moreover, we derive a neural persistence-based stopping criterion that shortens the training process while achieving accuracies comparable to early stopping based on validation loss.
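The measure is computed per layer: absolute weights are normalized to [0, 1], edges are added in decreasing weight order, and the zero-dimensional persistence diagram of the resulting filtration is summarized by a p-norm. The following is a minimal sketch of that computation for a single fully connected layer, using a union-find over the layer's bipartite graph; the function name is illustrative, and the handling of the final surviving component is a simplification rather than the authors' reference implementation (see the linked repository for that).

```python
import numpy as np

def neural_persistence(weights, p=2):
    """Sketch: zero-dimensional persistence of one layer's bipartite weight graph.

    weights: (n_in, n_out) weight matrix of a fully connected layer.
    p: order of the norm used to summarize the persistence diagram.
    """
    n_in, n_out = weights.shape
    # Normalize absolute weights to [0, 1] so layers are comparable.
    w = np.abs(weights)
    w = w / w.max()

    # Union-find over the n_in + n_out vertices of the bipartite graph.
    parent = list(range(n_in + n_out))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Superlevel-set filtration: insert edges in decreasing weight order.
    edges = sorted(
        ((w[i, j], i, n_in + j) for i in range(n_in) for j in range(n_out)),
        reverse=True,
    )

    # All vertices are born at filtration value 1; each merging edge of
    # weight wt destroys one connected component, giving the pair (1, wt).
    pairs = []
    for wt, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            pairs.append((1.0, wt))
    # The one essential component that never dies is omitted here for brevity.

    # Neural persistence: the p-norm of the diagram's persistence values.
    return np.linalg.norm([b - d for b, d in pairs], ord=p)
```

As a quick usage example, `neural_persistence(np.random.randn(784, 300))` evaluates the measure on a random 784-by-300 layer; tracking this value per layer across epochs is what the paper's stopping criterion builds on.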
Keywords: Algebraic topology, persistent homology, network complexity, neural network
TL;DR: We develop a new topological complexity measure for deep neural networks and demonstrate that it captures their salient properties.
Code: [BorgwardtLab/Neural-Persistence](https://github.com/BorgwardtLab/Neural-Persistence) · [1 community implementation on Papers with Code](https://paperswithcode.com/paper/?openreview=ByxkijC5FQ)
Data: [Fashion-MNIST](https://paperswithcode.com/dataset/fashion-mnist)
Community Implementations: [1 code implementation on CatalyzeX](https://www.catalyzex.com/paper/neural-persistence-a-complexity-measure-for/code)