A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines

Published: 17 Oct 2021, Last Modified: 05 May 2023
XAI 4 Debugging Workshop @ NeurIPS 2021 Poster
Keywords: XAI, local explanations, unsupervised ensemble learning, deep neural networks
TL;DR: We apply an unsupervised ensemble learning technique to aggregate feature attribution maps for a more accurate and robust interpretation of deep neural networks.
Abstract: Understanding the results of deep neural networks is an essential step towards wider acceptance of deep learning algorithms. Many approaches address the issue of interpreting artificial neural networks, but often provide divergent explanations. Moreover, different hyperparameters of an explanatory method can lead to conflicting interpretations. In this paper, we propose a technique for aggregating the feature attributions of different explanatory algorithms using Restricted Boltzmann Machines (RBMs) to achieve a more reliable and robust interpretation of deep neural networks. Several challenging experiments on real-world datasets show that the proposed RBM method outperforms popular feature attribution methods and basic ensemble techniques.
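The abstract describes aggregating the feature attributions of several explanatory algorithms with an RBM. A minimal sketch of the idea (not the authors' implementation): treat each input feature as a data point whose observed variables are the binarized votes of the individual explainers, and read a one-hidden-unit Bernoulli RBM's hidden activation as the aggregated importance score. All sizes, thresholds, and the simulated attribution maps below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)

# Hypothetical setup: 3 attribution methods over a 28x28 input (784 features).
# Each row is one input feature; each column is one explainer's binarized vote.
n_features, n_methods = 784, 3
true_importance = rng.random(n_features) < 0.2  # simulated "ground-truth" mask
votes = np.stack(
    [
        # Each explainer is modeled as a noisy copy of the true mask
        # (10% of votes flipped) -- purely for illustration.
        np.logical_xor(true_importance, rng.random(n_features) < 0.1)
        for _ in range(n_methods)
    ],
    axis=1,
).astype(float)

# A single hidden unit plays the role of the latent consensus importance.
rbm = BernoulliRBM(n_components=1, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(votes)

# transform() returns P(h=1 | v): an aggregated attribution score in [0, 1]
# per input feature. Note the hidden unit may learn either polarity, so in
# practice the sign must be calibrated against the individual explainers.
aggregated = rbm.transform(votes).ravel()
print(aggregated.shape)  # (784,)
```

Unlike simple averaging, the RBM weights each explainer by how consistent it is with the latent consensus, which is one way an unsupervised ensemble can down-weight an outlier attribution method.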