From Feature Visualization to Visual Circuits: Effect of Model Perturbation

TMLR Paper6243 Authors

17 Oct 2025 (modified: 28 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: Understanding the inner workings of large-scale deep neural networks is challenging yet crucial in several high-stakes applications. Mechanistic interpretability is an emerging field that tackles this challenge, often by identifying human-understandable subgraphs in deep neural networks known as circuits. In pre-trained vision models, these subgraphs are typically interpreted by visualizing their node features through a popular technique called feature visualization. Recent works have analyzed the stability of different feature visualization types under the adversarial model manipulation framework. This paper addresses limitations of existing works by proposing a novel attack, called ProxPulse, that simultaneously manipulates two types of feature visualizations. Surprisingly, when we analyze these attacks in the context of visual circuits, we find that visual circuits exhibit some robustness to ProxPulse. Consequently, we introduce a new attack based on ProxPulse that reveals the manipulability of visual circuits, highlighting their lack of robustness. The effectiveness of these attacks is validated across a range of models pre-trained on ImageNet, from smaller architectures such as AlexNet to medium-scale models such as ResNet-50 and larger ones such as ResNet-152 and DenseNet-201.
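For readers unfamiliar with the feature visualization technique referenced in the abstract, the following is a minimal, generic activation-maximization sketch (not the paper's ProxPulse attack): a random input is optimized by gradient ascent so that one channel of an intermediate layer fires strongly. The choice of ResNet-50, the hooked layer, the channel index, and all hyperparameters are illustrative assumptions, not details from the paper.

```python
import torch
import torchvision.models as models

# Load a pre-trained ImageNet model (assumption: ResNet-50, one of the
# architectures mentioned in the abstract).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

# Capture activations of an intermediate layer via a forward hook
# (assumption: layer3 of ResNet-50).
activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output

handle = model.layer3.register_forward_hook(hook)

# Start from a random image and optimize it directly.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

channel = 10  # arbitrary channel index chosen for illustration
for _ in range(256):
    optimizer.zero_grad()
    model(img)
    # Gradient ascent on the mean activation of the chosen channel
    # (minimize the negative activation).
    loss = -activations["feat"][0, channel].mean()
    loss.backward()
    optimizer.step()

handle.remove()
# `img` now approximates a feature visualization for that channel.
```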
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Satoshi_Hara1
Submission Number: 6243