Novel Topological Shapes of Model Interpretability

Oct 10, 2020 (edited Dec 06, 2020) · NeurIPS 2020 Workshop TDA and Beyond Blind Submission
  • Keywords: Unsupervised Learning, Clustering, Classification, Graph Based Learning, Bioinformatics, Quantitative Finance and Econometrics, Social Networks
  • TL;DR: Improve data analysis and model interpretability by applying Mapper to charts and enforcing new graph-layout constraints.
  • Abstract: The most accurate models can be the most challenging to interpret. This paper advances interpretability analysis by combining insights from $\texttt{Mapper}$ with recent interpretable machine-learning research. By enforcing new visualization constraints on $\texttt{Mapper}$, we produce a globally-to-locally interpretable visualization of the Explainable Boosting Machine. We demonstrate the usefulness of our approach on three data sets: cervical cancer risk, propaganda Tweets, and a loan default data set artificially hardened with severe concept drift.
  • Previous Submission: No
  • Poster: pdf
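For readers unfamiliar with the $\texttt{Mapper}$ construction underlying the abstract, the following is a minimal from-scratch sketch in Python, not the authors' implementation: cover the range of a one-dimensional lens with overlapping intervals, cluster the data points in each interval's preimage, and connect clusters that share points. The interval count, overlap fraction, clustering threshold, and single-linkage clusterer are all illustrative assumptions.

```python
import numpy as np
from itertools import combinations


def single_linkage(P, idx, eps):
    """Naive single-linkage clustering (illustrative choice): points closer
    than eps end up in the same cluster, via union-find on pairwise distances."""
    parent = list(range(len(idx)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if np.linalg.norm(P[i] - P[j]) < eps:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(idx)):
        groups.setdefault(find(i), []).append(int(idx[i]))
    return list(groups.values())


def mapper_graph(X, lens, n_intervals=5, overlap=0.25, eps=0.5):
    """Build a Mapper graph from data X and a 1-D lens (filter) function:
    overlapping intervals cover the lens range, each preimage is clustered,
    and clusters sharing at least one point are joined by an edge."""
    lo, hi = float(lens.min()), float(lens.max())
    length = (hi - lo) / n_intervals
    step = length * (1.0 - overlap)  # overlap fraction between intervals
    nodes = []  # each node is a frozenset of point indices
    start = lo
    while start < hi:
        idx = np.where((lens >= start) & (lens <= start + length))[0]
        if idx.size:
            nodes.extend(frozenset(c) for c in single_linkage(X[idx], idx, eps))
        start += step
    edges = [(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]
    return nodes, edges


# Usage: a circle with the x-coordinate as lens recovers a cycle-like graph.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
nodes, edges = mapper_graph(X, X[:, 0])
```

In the paper's setting the lens and data would come from model-explanation output (e.g. per-feature contributions of the Explainable Boosting Machine) rather than raw coordinates; the construction itself is unchanged.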