Hypothesis classes with a unique persistence diagram are NOT nonuniformly learnable

Oct 10, 2020 (edited Aug 05, 2021) · NeurIPS 2020 Workshop TDA and Beyond Blind Submission
  • Keywords: Topological data analysis, statistical learning theory, topological loss functions
  • TL;DR: We (don't) show that hypothesis classes with a unique persistence diagram are nonuniformly learnable.
  • Abstract: *We have since shown that these results are incorrect. Please see PDF for details.* Persistence-based summaries are increasingly integrated into deep learning through topological loss functions or regularisers. The implicit role of a topological term in a loss function is to restrict the class of functions in which we are learning (the hypothesis class) to those with a specific topology. Although doing so has had empirical success, to the best of our knowledge there exists no result in the literature that theoretically justifies this restriction. Given a binary classifier in the plane with a Morse-like decision boundary, we prove that the hypothesis class defined by restricting the topology of the possible decision boundaries to those with a unique persistence diagram results in a nonuniformly learnable class of functions. In doing so, we provide a statistical learning-theoretic justification for the use of persistence-based summaries in loss functions.
  • Previous Submission: No
  • Poster: pdf