A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines

21 Sept 2021 (edited 14 Nov 2021) · XAI 4 Debugging Workshop @ NeurIPS 2021 Poster
  • Keywords: XAI, local explanations, unsupervised ensemble learning, deep neural networks
  • TL;DR: We apply an unsupervised ensemble learning technique to aggregate feature attribution maps for a more accurate and robust interpretation of deep neural networks.
  • Abstract: Understanding the results of deep neural networks is an essential step towards wider acceptance of deep learning algorithms. Many approaches address the issue of interpreting artificial neural networks, but often provide divergent explanations. Moreover, different hyperparameters of an explanatory method can lead to conflicting interpretations. In this paper, we propose a technique for aggregating the feature attributions of different explanatory algorithms using Restricted Boltzmann Machines (RBMs) to achieve a more reliable and robust interpretation of deep neural networks. Several challenging experiments on real-world datasets show that the proposed RBM method outperforms popular feature attribution methods and basic ensemble techniques.
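
To make the kind of aggregation described above concrete, here is a minimal sketch of combining per-feature attribution maps from several explanation methods with a single-hidden-unit RBM, using scikit-learn's BernoulliRBM. The [0, 1] normalization, the one-hidden-unit choice, and the example method names (Integrated Gradients, LIME, SHAP) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM


def aggregate_attributions(attribution_maps, n_iter=200, random_state=0):
    """Aggregate attribution maps from several explanation methods into a
    single consensus map via an RBM with one hidden unit.

    attribution_maps : array-like of shape (n_methods, n_features)
        One (flattened) attribution map per explanation method.
    Returns an aggregated relevance vector of shape (n_features,).
    """
    maps = np.asarray(attribution_maps, dtype=float)

    # Normalize each method's map to [0, 1] so the RBM sees comparable,
    # Bernoulli-like inputs (illustrative choice, not from the paper).
    mins = maps.min(axis=1, keepdims=True)
    maxs = maps.max(axis=1, keepdims=True)
    maps = (maps - mins) / np.maximum(maxs - mins, 1e-12)

    # Rows = features ("samples"), columns = methods ("visible units"):
    # each feature is one observation of how the methods score it.
    X = maps.T  # shape (n_features, n_methods)

    rbm = BernoulliRBM(n_components=1, learning_rate=0.05,
                       n_iter=n_iter, random_state=random_state)
    rbm.fit(X)

    # The hidden unit's activation per feature acts as the consensus
    # relevance score across the explanation methods.
    return rbm.transform(X).ravel()


# Example: combine three hypothetical attribution maps for a 4-feature input.
maps = [
    [0.9, 0.1, 0.8, 0.0],   # e.g. Integrated Gradients
    [0.7, 0.2, 0.9, 0.1],   # e.g. LIME
    [0.8, 0.0, 0.6, 0.3],   # e.g. SHAP
]
print(aggregate_attributions(maps))
```

In this reading, the RBM plays the role of an unsupervised ensemble: each explanation method is treated as a noisy scorer of feature relevance, and the latent unit recovers a score on which the methods implicitly agree.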