Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach

Seojin Bang, Pengtao Xie, Heewook Lee, Wei Wu, Eric Xing

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
Abstract: Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are both necessary to convey a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, which leads to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides brief but comprehensive explanations. VIBI adopts an information-theoretic principle, the information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about the input (briefness) and maximally informative about the decision made by the black-box system on that input (comprehensiveness). We evaluate VIBI on three datasets and compare it with state-of-the-art interpretable machine learning methods, in terms of both interpretability and fidelity, using human and quantitative evaluations.
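The information bottleneck criterion the abstract describes, maximizing I(z, y) - beta * I(z, x) where z is the selected explanation and y is the black-box output, can be optimized through a variational bound. Below is a minimal PyTorch sketch under assumed design choices (a Gumbel-softmax relaxation for differentiable top-k feature selection and a uniform prior over features); the names Explainer, Approximator, sample_mask, k, beta, and tau are illustrative, not the authors' implementation.

```python
# Illustrative sketch of a VIBI-style objective; component names are assumptions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Explainer(nn.Module):
    """Scores each input feature; high scores mark candidate key features."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, d))

    def forward(self, x):
        return self.net(x)  # unnormalized log-probabilities over features

def sample_mask(logits, k, tau=0.5):
    # Relaxed top-k selection: draw k Gumbel-softmax samples and take the
    # elementwise max, which differentiably approximates a k-hot mask.
    samples = [F.gumbel_softmax(logits, tau=tau, hard=False) for _ in range(k)]
    return torch.stack(samples).max(dim=0).values

class Approximator(nn.Module):
    """Mimics the black-box decision from the masked (compressed) input."""
    def __init__(self, d, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, x_masked):
        return self.net(x_masked)

def vibi_loss(explainer, approximator, x, blackbox_probs, k=5, beta=0.1):
    logits = explainer(x)
    mask = sample_mask(logits, k)
    pred = approximator(x * mask)  # explanation z = masked input
    # Fidelity term (comprehensiveness): variational lower bound on I(z, y),
    # i.e., match the black-box's output distribution.
    fidelity = F.kl_div(F.log_softmax(pred, dim=-1), blackbox_probs,
                        reduction="batchmean")
    # Compression term (briefness): upper bound on I(z, x) via the KL
    # divergence from the selection distribution to a uniform prior.
    p = F.softmax(logits, dim=-1)
    compression = (p * (p.clamp_min(1e-8).log() + math.log(x.size(-1)))).sum(-1).mean()
    return fidelity + beta * compression
```

Here blackbox_probs are the softmax outputs of the system being explained, and beta controls the briefness/comprehensiveness trade-off: larger values compress the explanation more aggressively, smaller values favor fidelity to the black-box decision.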
Code: https://drive.google.com/open?id=1IHOf9qw1sQ5KNUtHsO6wHGXK1wjcvxxP
Keywords: interpretable machine learning, information bottleneck principle, black-box