Abstract: There is a rich and growing literature on producing local contrastive/counterfactual
explanations for black-box models (e.g. neural networks). In these methods, for
a given input, an explanation takes the form of a contrast point that differs from the
original input in very few features and lies in a different class. Other works build
globally interpretable models such as decision trees and rule lists, either from the
data with its actual labels or from the black-box model's predictions. Although
these interpretable global models can be useful, they may not be consistent with the
local explanations obtained from a specific black-box model of choice. In this work, we explore
the question: Can we produce a transparent global model that is simultaneously
accurate and consistent with the local (contrastive) explanations of the black-box
model? We introduce a natural local consistency metric that quantifies whether the local
explanations and predictions of the black-box model are also consistent with the
proxy global transparent model. Based on a key insight, we propose a novel method
in which we create custom boolean features from sparse local contrastive explanations
of the black-box model and then train a globally transparent model on just these features.
We show empirically that such models have higher local consistency than those produced
by other known strategies, while remaining close in performance to models that
are trained with access to the original data.
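Below is a minimal, hypothetical sketch of the idea described in the abstract, assuming each local contrastive explanation is given as the small set of (feature, contrast value) pairs needed to flip the black-box prediction. The thresholding scheme, the use of scikit-learn's DecisionTreeClassifier as the transparent model, and the agreement-based consistency check are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch: build boolean features from sparse contrastive
# explanations and fit a transparent model on them. The explanation format,
# the >= thresholds, and the consistency check are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boolean_features_from_explanations(X, explanations):
    """Turn each (feature_index, contrast_value) pair appearing in some local
    contrastive explanation into a boolean column of the form x[j] >= value."""
    conditions = sorted({(j, v) for expl in explanations for (j, v) in expl})
    B = np.column_stack([(X[:, j] >= v).astype(int) for (j, v) in conditions])
    return B, conditions

def local_consistency(global_model, bb_preds, X_bool, explanations_bool):
    """Assumed metric: fraction of inputs where the transparent model agrees
    with the black-box prediction AND changes its prediction when the boolean
    conditions named by the local explanation are flipped."""
    consistent = 0
    for i, flips in enumerate(explanations_bool):
        x = X_bool[i].copy()
        if global_model.predict(x.reshape(1, -1))[0] != bb_preds[i]:
            continue  # transparent model already disagrees on the input itself
        x_contrast = x.copy()
        x_contrast[flips] = 1 - x_contrast[flips]  # flip the explained conditions
        if global_model.predict(x_contrast.reshape(1, -1))[0] != bb_preds[i]:
            consistent += 1
    return consistent / len(explanations_bool)

# Toy usage: two numeric features and a stand-in "black box" we only query.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
bb_preds = (X[:, 0] + X[:, 1] > 0).astype(int)
# Stand-in contrastive explanations: one changed feature per input.
explanations = [[(0, 0.0)] if i % 2 == 0 else [(1, 0.0)] for i in range(200)]

X_bool, conditions = boolean_features_from_explanations(X, explanations)
tree = DecisionTreeClassifier(max_depth=3).fit(X_bool, bb_preds)

# Map each explanation to the indices of its boolean conditions.
cond_index = {c: k for k, c in enumerate(conditions)}
explanations_bool = [[cond_index[c] for c in expl] for expl in explanations]
print("local consistency:", local_consistency(tree, bb_preds, X_bool, explanations_bool))
```

The transparent model here sees only the explanation-derived boolean columns, never the raw features, which is the core of the described approach; the exact choice of transparent model class and consistency definition would follow the paper.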