The Missing Curve Detectors of InceptionV1: Applying Sparse Autoencoders to InceptionV1 Early Vision

Published: 24 Jun 2024, Last Modified: 31 Jul 2024
Venue: ICML 2024 MI Workshop Spotlight
License: CC BY 4.0
Keywords: mechanistic interpretability, InceptionV1, sparse autoencoders, superposition, vision interpretability
TL;DR: Sparse autoencoders work for InceptionV1 early vision.
Abstract: Recent work on sparse autoencoders (SAEs) has shown promise in extracting interpretable features from neural networks and addressing challenges with polysemantic neurons caused by superposition. In this paper, we apply SAEs to the early vision layers of InceptionV1, a well-studied convolutional neural network, with a focus on curve detectors. Our results demonstrate that SAEs can uncover new interpretable features not apparent from examining individual neurons, including additional curve detectors that fill in previous gaps. We also find that SAEs can decompose some polysemantic neurons into more monosemantic constituent features. These findings suggest that SAEs are a valuable tool for understanding InceptionV1 and convolutional neural networks more generally.
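The pipeline the abstract describes (training an SAE on activations from an early InceptionV1 layer, then inspecting the learned features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes torchvision's GoogLeNet checkpoint as InceptionV1, hooks the `conv3` layer as a stand-in for "early vision", and uses an arbitrary 8x expansion factor and L1 coefficient, none of which are the paper's settings.

```python
# Minimal sketch: train a sparse autoencoder on early-layer InceptionV1
# activations. Layer choice, expansion factor, and L1 weight are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        f = torch.relu(self.encoder(x))      # sparse feature activations
        return self.decoder(f), f

# InceptionV1 is available in torchvision under its original name, GoogLeNet.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()

acts = []
def hook(module, inp, out):
    # Treat each spatial position's channel vector as one training sample:
    # [B, C, H, W] -> [B*H*W, C].
    acts.append(out.detach().permute(0, 2, 3, 1).reshape(-1, out.shape[1]))

# conv3 is an early layer with 192 output channels (illustrative choice).
handle = model.conv3.register_forward_hook(hook)

# Random stand-in images; real use would collect activations over a dataset.
with torch.no_grad():
    model(torch.randn(16, 3, 224, 224))
handle.remove()

d_in = 192                                        # channel count of conv3
sae = SparseAutoencoder(d_in, d_hidden=8 * d_in)  # 8x expansion (assumption)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3                                   # sparsity weight (assumption)

for batch in acts:
    recon, f = sae(batch)
    # Reconstruction error plus an L1 penalty encouraging sparse features.
    loss = (recon - batch).pow(2).mean() + l1_coeff * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Interpreting the learned dictionary (e.g., looking for curve detectors) would then proceed by examining which image patches most strongly activate each feature direction.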
Submission Number: 59