Hypothesis classes with a unique persistence diagram are NOT nonuniformly learnable

Published: 31 Oct 2020, Last Modified: 05 May 2023
TDA & Beyond 2020 Spotlight
Keywords: Topological data analysis, statistical learning theory, topological loss functions
TL;DR: We (don't) show that hypothesis classes with a unique persistence diagram are nonuniformly learnable.
Abstract: *We have since shown that these results are incorrect; please see the PDF for details.* Persistence-based summaries are increasingly integrated into deep learning through topological loss functions or regularisers. The implicit role of a topological term in a loss function is to restrict the class of functions over which we learn (the hypothesis class) to those with a specific topology. Although doing so has had empirical success, to the best of our knowledge no result in the literature theoretically justifies this restriction. Given a binary classifier in the plane with a Morse-like decision boundary, we prove that restricting the topology of the possible decision boundaries to those with a unique persistence diagram yields a nonuniformly learnable hypothesis class. In doing so, we provide a statistical-learning-theoretic justification for the use of persistence-based summaries in loss functions.
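The paper itself contains no code; to make the abstract's notion of a "topological term in a loss function" concrete, below is a minimal sketch of our own. It computes the 0-dimensional persistence diagram of a sampled 1D function via its sublevel-set filtration and penalises total finite persistence, so the penalty vanishes when the function has a single basin. The names `sublevel_persistence_1d` and `topological_penalty`, and the 1D toy setting, are our assumptions; the paper's setting is planar decision boundaries, and a practical regulariser would also need (sub)differentiable persistence.

```python
import numpy as np

def sublevel_persistence_1d(values):
    """0-dimensional persistence pairs of a sampled 1D function under the
    sublevel-set filtration. Components are born at local minima and die,
    by the elder rule, when two components merge at a higher value."""
    n = len(values)
    parent = [-1] * n  # -1 marks a vertex not yet in the filtration
    birth = [0.0] * n  # birth value, stored at each component's root
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # sweep vertices in order of increasing function value
    for idx in np.argsort(values, kind="stable"):
        parent[idx] = idx
        birth[idx] = values[idx]
        for nb in (idx - 1, idx + 1):
            if 0 <= nb < n and parent[nb] != -1:
                ra, rb = find(idx), find(nb)
                if ra == rb:
                    continue
                if birth[ra] > birth[rb]:  # elder rule: older component survives
                    ra, rb = rb, ra
                if values[idx] > birth[rb]:  # skip zero-persistence pairs
                    pairs.append((birth[rb], values[idx]))
                parent[rb] = ra
    # on a connected 1D domain exactly one component survives to infinity
    root = find(0)
    pairs.append((birth[root], np.inf))
    return pairs

def topological_penalty(values):
    """Total finite 0-dim persistence: zero when the sampled function has a
    single basin, larger when spurious local minima appear."""
    pairs = sublevel_persistence_1d(np.asarray(values, dtype=float))
    return sum(d - b for b, d in pairs if np.isfinite(d))

# Toy usage: a unimodal function incurs no penalty, a noisy one does.
x = np.linspace(0.0, 1.0, 200)
smooth = (x - 0.5) ** 2                                        # one basin
noisy = smooth + 0.05 * np.random.default_rng(0).standard_normal(200)
print(topological_penalty(smooth))  # 0.0
print(topological_penalty(noisy))   # > 0: spurious components penalised
```

Restricting to 0-dimensional sublevel-set persistence keeps the sketch self-contained; topological loss functions of the kind the abstract describes are typically built on library implementations of differentiable persistence rather than a hand-rolled union-find.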