Novel Topological Shapes of Model Interpretability

Published: 31 Oct 2020, Last Modified: 05 May 2023
Venue: TDA & Beyond 2020 Poster
Readers: Everyone
Keywords: Unsupervised Learning, Clustering, Classification, Graph Based Learning, Bioinformatics, Quantitative Finance and Econometrics, Social Networks
TL;DR: Improve data analysis and model interpretability by applying Mapper on charts and enforcing new graph layout constraints.
Abstract: The most accurate models can be the most challenging to interpret. This paper advances interpretability analysis by combining insights from $\texttt{Mapper}$ with recent interpretable machine-learning research. By enforcing new visualization constraints on $\texttt{Mapper}$, we produce a globally- to locally-interpretable visualization of the Explainable Boosting Machine. We demonstrate the usefulness of our approach on three data sets: cervical cancer risk, propaganda Tweets, and a loan default data set that was artificially hardened with severe concept drift.
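The abstract describes pairing $\texttt{Mapper}$ with the Explainable Boosting Machine. As a minimal sketch of that kind of pipeline (not the authors' exact method), the snippet below fits an EBM with the `interpret` package, collects per-sample term contributions as an explanation space, and summarizes that space with `kmapper`; the dataset, lens, cover, and clusterer choices here are illustrative assumptions only.

```python
# Hypothetical sketch: Mapper over an EBM's local-explanation space.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import load_breast_cancer
from interpret.glassbox import ExplainableBoostingClassifier
import kmapper as km

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

# 1. Fit the glassbox model.
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# 2. Per-sample additive term contributions (the "explanation space").
local_exp = ebm.explain_local(X, y)
contributions = np.array(
    [local_exp.data(i)["scores"] for i in range(len(X))]
)

# 3. Run Mapper over the explanation space, using the EBM's predicted
#    probability of the positive class as a one-dimensional lens.
mapper = km.KeplerMapper(verbose=0)
lens = ebm.predict_proba(X)[:, 1].reshape(-1, 1)
graph = mapper.map(
    lens,
    contributions,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),
    clusterer=DBSCAN(eps=0.5, min_samples=5),
)

# 4. Export an interactive visualization of the Mapper graph.
mapper.visualize(graph, path_html="ebm_mapper.html",
                 title="EBM explanation space")
```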
Previous Submission: No
Poster: pdf