Much Easier Said Than Done: Falsifying the Causal Relevance of Decoding Methods

Published: 06 Dec 2022, Last Modified: 10 Nov 2024
ICBINB poster
Keywords: ablation, representation, semantic decoding, linear decoding, representation similarity, neural network interpretability, activation space, causal testing
TL;DR: We show that linear classifier probes, commonly used for interpretability, do not isolate causal function in convolutional neural networks.
Abstract: Linear classifier probes are frequently used to better understand how neural networks function. Researchers have approached the problem of determining unit importance in neural networks by probing their learned, internal representations. Linear classifier probes identify highly selective units as the most important for network function. Whether a network actually relies on highly selective units can be tested by removing them from the network via ablation. Surprisingly, ablating highly selective units produces only small performance deficits, and even then only in some cases. Despite the absence of ablation effects for selective units, linear decoding methods remain effective for interpreting network function, leaving the source of their effectiveness a mystery. To falsify the exclusive role of selectivity in network function and resolve this contradiction, we systematically ablate groups of units in subregions of activation space. We find only a weak relationship between the units identified by probes and those identified by ablation. More specifically, we find that an interaction between selectivity and the average activity of a unit better predicts the performance deficits from ablating groups of units in AlexNet, VGG16, MobileNetV2, and ResNet101. Linear decoders are likely effective in part because the units they identify overlap with those that are causally important for network function. Interpretability methods could be improved by focusing on causally important units.
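Since the abstract describes the probe-then-ablate workflow only in prose, below is a minimal sketch of that comparison in PyTorch: score each unit at a hidden layer with a class-selectivity index, ablate the most selective units with a forward hook, and measure the resulting performance deficit. The model, layer choice, placeholder data, and selectivity index are illustrative assumptions, not the authors' released code (see the linked implementation below).

```python
import torch
from torchvision.models import alexnet

# Untrained AlexNet and random placeholder data keep the sketch self-contained;
# a real analysis would use pretrained weights and a labeled evaluation set.
model = alexnet(weights=None).eval()
layer = model.features[8]  # a late conv layer (256 output channels); arbitrary choice

# 1. Capture the layer's activations with a forward hook.
acts = {}
capture = layer.register_forward_hook(lambda m, i, o: acts.update(a=o.detach()))

x = torch.randn(64, 3, 224, 224)   # placeholder images
y = torch.randint(0, 1000, (64,))  # placeholder labels

with torch.no_grad():
    base_acc = (model(x).argmax(1) == y).float().mean().item()

# 2. Score each channel's class selectivity from its mean spatial activation.
a = acts["a"].mean(dim=(2, 3))  # (batch, channels)
sel = torch.zeros(a.shape[1])
for c in range(a.shape[1]):
    class_means = torch.stack([a[y == k, c].mean() for k in y.unique()])
    top, rest = class_means.max(), class_means.sort().values[:-1].mean()
    sel[c] = (top - rest) / (top.abs() + rest.abs() + 1e-8)  # one common index

# 3. Ablate (zero out) the k most selective channels and re-measure accuracy.
to_ablate = sel.topk(16).indices

def ablate(module, inp, out):
    out[:, to_ablate] = 0.0  # silence the selected units for all inputs
    return out

handle = layer.register_forward_hook(ablate)
with torch.no_grad():
    ablated_acc = (model(x).argmax(1) == y).float().mean().item()
handle.remove()
capture.remove()

print(f"accuracy: {base_acc:.3f} -> {ablated_acc:.3f} after ablating 16 units")
```

The paper's actual experiments go further, ablating groups of units in subregions of activation space and relating the deficits to an interaction of selectivity and average activity; the sketch above only illustrates the basic probe-versus-ablation comparison for a single selectivity ranking.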
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/much-easier-said-than-done-falsifying-the/code) (via CatalyzeX)