How well does Persistent Homology generalize on graphs?

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Generalization Bounds, Persistent Homology, Topological Data Analysis, Graph Representation Learning, Learning Theory
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Persistent Homology (PH) combined with neural networks is effective for predictive tasks on graphs. We use PAC-Bayesian analysis to provide generalization guarantees for neural network-based persistence layers (PersLay) on graphs.
Abstract: Persistent Homology (PH) is one of the pillars of topological data analysis that leverages multiscale topological descriptors to extract meaningful features from data. More recently, the combination of PH and neural networks has been successfully used to tackle predictive tasks on graphs. However, the generalization capabilities of PH on graphs remain largely unexplored. We derive a PAC-Bayesian perturbation analysis to bridge this gap. Specifically, we introduce the first data-dependent generalization guarantees for neural network-based persistence layers (PersLay). Notably, PersLay consists of a general framework that subsumes various vectorization methods of persistence diagrams in the literature. We substantiate our theoretical analysis with experimental studies and provide insights about the generalization of PH on real-world graph classification benchmarks.
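To make the abstract's claim concrete that PersLay subsumes various vectorization methods of persistence diagrams, here is a minimal illustrative sketch of one such instance: a Gaussian point transformation with persistence-based weighting and sum pooling. This is not the paper's implementation; the function name, center choice, and bandwidth are hypothetical, and PersLay admits many other point transformations and pooling operations.

```python
import numpy as np

def perslay_vectorize(diagram, centers, sigma=0.1):
    """Sketch of a PersLay-style vectorization (one instance of the framework).

    diagram: (n, 2) array of (birth, death) pairs from a persistence diagram.
    centers: (m, 2) array of sample points at which the Gaussian transform
             is evaluated (a hypothetical choice; PersLay learns these).
    Returns an (m,) feature vector, invariant to the ordering of diagram points.
    """
    diagram = np.asarray(diagram, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # Weight each point by its persistence (death - birth), so that
    # short-lived topological features contribute less.
    weights = diagram[:, 1] - diagram[:, 0]                           # (n,)
    # Gaussian response of every diagram point at every center.
    d2 = ((diagram[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, m)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))                            # (n, m)
    # Permutation-invariant pooling over the diagram's points.
    return (weights[:, None] * phi).sum(axis=0)                       # (m,)
```

Swapping the Gaussian transform for triangle functions or line transformations, and the sum for max or top-k pooling, recovers other vectorizations that the PersLay framework covers.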
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7697