On the Explainability of Convolutional Layers for Multi-Class Problems

Published: 19 Jan 2022, Last Modified: 05 May 2023
CLeaR Workshop Poster
Keywords: neural-symbolic, neuro-symbolic, explainable AI, XAI, ERIC, SRAE, rule extraction, explainability
TL;DR: We compare two methods of extracting sentence-like explanations of the behaviour of trained CNNs and explore conditions under which both may be applied to multi-class problems.
Abstract: Neuro-symbolic reasoning systems support the goal of making the behaviour of trained neural networks more explainable. ERIC and SRAE are two such methods for CNNs; both provide decompositional, layer-wise explanations that can be extracted post hoc and deployed as classifiers in their own right. However, the two methods differ in how they represent knowledge and reason over those representations: ERIC reduces a layer's behaviour to a discrete logic program for symbolic reasoning over a vocabulary at most as large as the number of kernels in that layer, whereas SRAE reduces the layer's output to a more limited but concise vocabulary represented by a set of continuous, orthogonal and sparse features. We compare both methods and show that, despite these differences, they yield similar results with respect to fidelity when deployed as approximations of the original CNN. SRAE offers marginally stronger fidelity than ERIC, but in sacrificing some fidelity ERIC is able to offer a larger and more discrete set of symbols that more closely match what individual kernels actually see. Neither method has previously been demonstrated on multi-class problems, but we show for the first time that under certain conditions they can yield high fidelity in such cases. However, for both methods fidelity drops on multi-class datasets in which images have less distinct edges. Similar results under different representations suggest challenges for layer-wise knowledge extraction in general and invite further investigation from the neuro-symbolic community, with our results offering an early benchmark for such research.
Supplementary Material: pdf
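The following is a minimal sketch of the kind of decompositional, layer-wise extraction the abstract describes, in the spirit of ERIC's binarisation of kernel activations. It is not the authors' implementation: the mean pooling, the per-kernel mean threshold, and the shallow decision tree standing in for the extracted logic program are all illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_layerwise_surrogate(activations, labels):
    """Hypothetical ERIC-style extraction from one convolutional layer.

    activations: array of shape (n_images, n_kernels, H, W), the layer's
                 output for a batch of images.
    labels:      class indices to approximate (the CNN's own predictions
                 would be used when measuring fidelity).
    """
    # Pool each kernel's activation map to a single score per image
    # (mean pooling is an assumption; other norms could be used).
    pooled = activations.mean(axis=(2, 3))            # (n_images, n_kernels)

    # Binarise each kernel relative to its own mean activation, giving
    # one discrete symbol per kernel -- a vocabulary at most as large
    # as the number of kernels in the layer.
    thresholds = pooled.mean(axis=0)                   # (n_kernels,)
    symbols = (pooled > thresholds).astype(int)        # (n_images, n_kernels)

    # A shallow decision tree stands in for the extracted rule set: it
    # reasons over the binary symbols and can itself be deployed as a
    # classifier approximating the original CNN.
    surrogate = DecisionTreeClassifier(max_depth=5)
    surrogate.fit(symbols, labels)

    # Agreement between the surrogate and the supplied labels; when the
    # labels are the CNN's predictions, this is a fidelity estimate.
    return surrogate, surrogate.score(symbols, labels)
```

An SRAE-style extraction would differ at the representation step: instead of thresholding per-kernel scores into binary symbols, it would learn a small set of continuous, orthogonal and sparse features from the pooled layer output and fit the surrogate on those.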