A sparse null code emerges in deep neural networks

Published: 02 Nov 2023, Last Modified: 18 Dec 2023 · UniReps Oral
Keywords: Vision Transformers, Convolutional Networks, Interpretability, Sparsity, Attention
TL;DR: We observe and characterize a novel phenomenon in which a single sparse activation pattern appears to indicate the absence of relevant features.
Abstract: The internal representations of deep vision models are often assumed to encode specific image features, such as contours, textures, and object parts. However, deep networks can also learn highly abstract representations that are not tied to any specific image feature. Here we present evidence for one such abstract representation in transformers and modern convolutional architectures that appears to serve as a null code, indicating image regions that are non-diagnostic for the object class. These null codes are both statistically and qualitatively distinct from the more commonly reported feature-related codes of vision models. Specifically, they have several distinguishing characteristics: they are highly sparse, each network expresses them as a single unique activation pattern, they emerge abruptly at intermediate network depths, and they are activated in a feature-independent manner by weakly informative image regions, such as backgrounds. Together, these findings reveal a new class of highly abstract representations in deep vision models: sparse null codes that seem to indicate the absence of relevant features.
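The signatures described above suggest a simple diagnostic one could run on token (or spatial) activations extracted from a trained network: measure per-token sparsity, and check whether the sparse tokens all match one shared pattern. The sketch below illustrates that diagnostic on synthetic data; the `null_code` vector, the thresholds, and the synthetic "background" vs. "object" tokens are all hypothetical stand-ins, not the paper's actual measurements or methodology.

```python
import numpy as np

def sparsity(v, eps=0.05):
    """Fraction of entries whose magnitude falls below eps."""
    return float(np.mean(np.abs(v) < eps))

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
d = 64  # hypothetical token embedding dimension

# Hypothetical null code: one fixed, highly sparse activation pattern
# (only 4 of 64 dimensions are nonzero).
null_code = np.zeros(d)
null_code[:4] = rng.normal(size=4)

# Synthetic activations: "background" tokens all reuse the same null
# code (plus small noise), while "object" tokens carry dense,
# feature-like activations.
background = null_code + 0.01 * rng.normal(size=(10, d))
objects = rng.normal(size=(10, d))

# Diagnostic 1: background tokens are far sparser than object tokens.
bg_sparsity = np.mean([sparsity(t) for t in background])
obj_sparsity = np.mean([sparsity(t) for t in objects])

# Diagnostic 2: sparse background tokens all match ONE shared pattern,
# consistent with a single unique null code per network.
bg_match = np.mean([cosine(t, null_code) for t in background])
obj_match = np.mean([abs(cosine(t, null_code)) for t in objects])

print(f"background sparsity {bg_sparsity:.2f} vs object {obj_sparsity:.2f}")
print(f"null-code similarity: background {bg_match:.2f} vs object {obj_match:.2f}")
```

On a real model, the same two statistics would be computed on activations captured at each layer (e.g. via forward hooks), with the abrupt onset of high-sparsity, single-pattern tokens expected at intermediate depths.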
Track: Proceedings Track
Submission Number: 74