Finding Relevant Information in Saliency Related Neural Networks

Published: 27 Oct 2023, Last Modified: 05 Dec 2023, InfoCog@NeurIPS2023 Poster
Keywords: saliency prediction, mutual information, XAI, Explainable AI
TL;DR: Measuring information flow in saliency prediction networks
Abstract: Over the last few years, many saliency models have shifted to Deep Learning (DL). In this context, DL models are a double-edged sword: they boost estimation performance, but they have less explanatory power than more explicit models. This drop in explanatory power is why DL models are often dubbed implicit models. Explainable AI (XAI) techniques have been formulated to address this shortfall; they work by extracting information from the network and explaining it. Here, we demonstrate the effectiveness of the Relevant Information Approach in accounting for saliency networks. We apply this approach to saliency models based on explicit algorithms, represented as neural networks; such networks are known to contain relevant information in their neurons. We estimate the relevant information of each neuron with respect to first-layer features (intensity, red, blue) and their higher-level manipulations, measuring relevant information as the Mutual Information (MI) between quantized features and the label. These experiments were conducted on a subset of the CAT2000 dataset.
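The core measurement the abstract describes, mutual information between a neuron's quantized activations and a label, can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the bin count, the histogram-based estimator, and the toy data are all assumptions made for the example.

```python
# Sketch of a histogram-based MI estimate between quantized neuron
# activations and a binary saliency label. Bin count (n_bins) and the
# synthetic data below are illustrative assumptions, not the paper's setup.
import numpy as np

def mutual_information(activations: np.ndarray, labels: np.ndarray, n_bins: int = 16) -> float:
    """MI (in bits) between quantized activations and discrete labels."""
    # Quantize continuous activations into discrete bins.
    edges = np.histogram_bin_edges(activations, bins=n_bins)
    bins = np.digitize(activations, edges)
    # Empirical joint distribution over (quantized activation, label).
    joint = np.zeros((n_bins + 2, int(labels.max()) + 1))
    for b, y in zip(bins, labels):
        joint[b, y] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal over activation bins
    py = joint.sum(axis=0, keepdims=True)  # marginal over labels
    nz = joint > 0                          # avoid log(0) terms
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy usage: a neuron whose activation weakly tracks a binary saliency label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)           # binary saliency label
a = y + 0.8 * rng.standard_normal(5000)     # correlated activation
print(f"I(neuron; label) ~ {mutual_information(a, y):.3f} bits")
```

A neuron carrying more label-relevant information yields a higher MI estimate under this scheme; in practice the bin count trades off estimation bias against variance.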
Submission Number: 37