How to Explain Neural Networks: A perspective of data space division

CoRR 2021 (modified: 02 Dec 2022)
Abstract: The lack of interpretability has hindered the large-scale adoption of AI technologies. However, what interpretability fundamentally means, and how to achieve it in practice, remain unclear. In this study, we provide notions of interpretability based on approximation theory. We first apply this approximation-based interpretation to a specific model, the fully connected neural network, and then propose using the MLP as a universal interpreter to explain arbitrary black-box models. Extensive experiments demonstrate the effectiveness of our approach.
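
The abstract does not detail the training procedure, but "MLP as a universal interpreter" suggests a surrogate (approximation) setup: train an MLP to mimic the black box's input-output mapping. The following is a minimal sketch under that assumption, using scikit-learn; the dataset, models, and hyperparameters are illustrative choices, not the paper's actual method.

```python
# Hypothetical sketch (not the paper's published code): an MLP surrogate
# trained to approximate a black-box model's input-output mapping.
# All model and hyperparameter choices here are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic data and an opaque "black-box" model to be explained.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Query the black box for soft predictions and fit the MLP surrogate to
# that function rather than to the raw labels (a distillation-style choice).
soft_targets = black_box.predict_proba(X_train)[:, 1]
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
surrogate.fit(X_train, soft_targets)

# Fidelity check: how closely the surrogate tracks the black box on held-out data.
fidelity_mse = np.mean(
    (surrogate.predict(X_test) - black_box.predict_proba(X_test)[:, 1]) ** 2
)
print(f"surrogate fidelity (MSE vs. black box): {fidelity_mse:.4f}")
```

Fitting the surrogate to soft probabilities rather than hard labels makes the MLP approximate the black box's function directly, which is the quantity an approximation-based notion of interpretability would measure.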