Abstract: The adoption of machine learning algorithms, especially in critical domains, is often hindered by their lack of interpretability. In this paper we discuss methods that produce local explanations in the form of either counterfactuals or rules. However, choosing the most appropriate explanation method, and then one of the explanations it generates, is not an easy task. Instead of producing only a single explanation, we propose creating a set of diverse solutions with a specialized ensemble of explanation methods. The resulting large sets of explanations are filtered using the dominance relation, and the best compromise explanations are then selected with a multi-criteria selection method. The usefulness of these approaches is demonstrated in two experimental studies, carried out with counterfactual and rule explanations, respectively.
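The filtering step mentioned in the abstract relies on the (Pareto) dominance relation over explanation quality criteria. As a minimal sketch of that idea (not the paper's actual implementation), the following Python snippet keeps only non-dominated candidates, assuming each explanation is scored on hypothetical criteria such as proximity and sparsity, both to be minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse on every criterion and strictly
    better on at least one (all criteria are minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(scores):
    """Keep only the non-dominated score vectors from a candidate set."""
    return [s for s in scores if not any(dominates(o, s) for o in scores if o is not s)]

# Hypothetical explanation scores as (proximity, sparsity) pairs, lower is better.
candidates = [(0.2, 3), (0.5, 2), (0.6, 4), (0.1, 5)]
print(pareto_filter(candidates))  # (0.6, 4) is dominated by (0.2, 3) and removed
```

A multi-criteria selection method would then pick a compromise from the surviving non-dominated set, e.g. by scoring each candidate's distance to an ideal point; the abstract does not specify which selection method the paper uses.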