Evaluating Contrast Localizer for Identifying Causal Units in Social & Mathematical Tasks in Language Models
Keywords: AI interpretability, Cognitive Neuroscience
TL;DR: We test whether neuroscience-inspired contrast localizer methods can reliably identify task-specific units in language and vision-language models.
Abstract: This work adapts a neuroscientific contrast localizer to pinpoint causally relevant units for Theory of Mind (ToM) and mathematical reasoning tasks in large language models (LLMs) and vision-language models (VLMs). Across 11 LLMs and 5 VLMs ranging in size from 3B to 90B parameters, we localize top-activated units using contrastive task pairs and assess their causal role via targeted ablations. We compare the effect of lesioning functionally selected units against lesioning low-activation and randomly selected units on downstream accuracy across established ToM and mathematical benchmarks. Contrary to expectations, low-activation units sometimes produced larger performance drops than the highly activated ones, and units derived from the mathematical localizer often impaired ToM performance more than those from the ToM localizer. These findings call into question the specificity of contrast-based methods and highlight the need for broader stimulus sets and for selection criteria that more accurately capture task-specific units.
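A minimal sketch of the localize-then-ablate procedure the abstract describes, assuming a HuggingFace-style causal LM: rank units by their activation contrast between a task condition and a matched control, then lesion the top-, low-, and randomly selected units via a forward hook. The model name, layer index, top-k fraction, and stimuli below are illustrative placeholders, not the paper's actual configuration.

```python
# Sketch of a contrast localizer + targeted ablation (assumptions noted inline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.2-3B"  # hypothetical pick from the 3B-90B range
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_unit_activations(sentences, layer_idx):
    """Mean activation per hidden unit at one layer, averaged over tokens and items."""
    total, count = None, 0
    for s in sentences:
        ids = tok(s, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts = out.hidden_states[layer_idx][0]       # (seq_len, hidden_dim)
        m = acts.float().mean(dim=0)                 # average over tokens
        total = m if total is None else total + m
        count += 1
    return total / count

# Contrastive task pair: ToM stories vs. matched controls (placeholder stimuli).
tom_stimuli = ["Sally thinks the ball is in the basket, but it was moved."]
control_stimuli = ["The ball is in the basket next to the window."]

layer = 16                                           # assumed mid-depth layer
contrast = (mean_unit_activations(tom_stimuli, layer)
            - mean_unit_activations(control_stimuli, layer))

k = int(0.01 * contrast.numel())                     # assumed top 1% of units
top_units = torch.topk(contrast, k).indices          # functionally selected
low_units = torch.topk(-contrast, k).indices         # low-activation baseline
rand_units = torch.randperm(contrast.numel())[:k]    # random baseline

def ablate(units, layer_idx):
    """Register a hook that zeroes the chosen units at one layer during inference."""
    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output
        h[..., units] = 0.0
        return (h, *output[1:]) if isinstance(output, tuple) else h
    # model.model.layers assumes a Llama-style module layout.
    return model.model.layers[layer_idx].register_forward_hook(hook)

# Usage: lesion each unit set in turn, re-run the ToM / math benchmarks,
# and compare the downstream accuracy drops across the three selections.
handle = ablate(top_units, layer)
# ... evaluate benchmark accuracy here ...
handle.remove()
```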
Public: Yes
Track: Lessons-Learned
Submission Number: 5