Measuring the Reliability of Causal Probing Methods: Tradeoffs, Limitations, and the Plight of Nullifying Interventions
Keywords: interpretability, probing, causal probing, interventions, mechanistic interpretability, language models
TL;DR: We introduce a framework to evaluate causal probing interventions, finding that each method faces inherent tradeoffs and that nullifying methods are the least reliable.
Abstract: Causal probing aims to analyze foundation models by examining how intervening on their representation of various latent properties impacts their outputs. Recent works have cast doubt on the theoretical basis of several leading causal probing methods, but it has been unclear how to systematically evaluate the effectiveness of these methods in practice. To address this, we formally define and quantify two key causal probing desiderata: *completeness* (how thoroughly the representation of the target property has been transformed) and *selectivity* (how little the intervention impacts non-targeted properties). We introduce an empirical analysis framework to measure and evaluate these quantities, allowing us to make the first direct comparisons of the reliability of different families of causal probing methods (e.g., linear vs. nonlinear or counterfactual vs. nullifying interventions). We find that: (1) there is an inherent tradeoff between completeness and selectivity; (2) no leading probing method is able to consistently satisfy both criteria at once; (3) methods with more favorable tradeoffs have a more consistent impact on LLM behavior; and (4) nullifying interventions are far less complete than counterfactual interventions, suggesting that nullifying methods may not be an effective approach to causal probing.
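To make the two desiderata concrete, below is a minimal, illustrative sketch (not the paper's exact formulation) of how one might quantify completeness and selectivity for a simple intervention: the probe architecture (logistic regression), the single-step INLP-style nullifying projection, the toy data, and the metric formulas are all assumptions chosen for illustration.

```python
# Illustrative sketch: quantifying completeness and selectivity of a
# causal probing intervention. All names, data, and formulas here are
# hypothetical choices, not the paper's definitions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy representations: 1,000 examples, 64-dim, with a linearly encoded
# target property z_t and a non-target property z_n.
n, d = 1000, 64
H = rng.normal(size=(n, d))
w_t, w_n = rng.normal(size=d), rng.normal(size=d)
z_t = (H @ w_t > 0).astype(int)   # target property labels
z_n = (H @ w_n > 0).astype(int)   # non-target property labels

def probe_accuracy(X, y):
    """Train a linear probe on one half; return held-out accuracy."""
    split = n // 2
    clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
    return clf.score(X[split:], y[split:])

# Hypothetical nullifying intervention: project out the direction of a
# probe trained on the target property (one INLP-style projection step).
probe_t = LogisticRegression(max_iter=1000).fit(H, z_t)
u = probe_t.coef_[0] / np.linalg.norm(probe_t.coef_[0])
H_null = H - np.outer(H @ u, u)

# Majority-class baseline for the target property.
chance = max(z_t.mean(), 1 - z_t.mean())

# Completeness: how far target-probe accuracy drops toward chance
# after the intervention (1.0 = fully removed, 0.0 = no effect).
acc_before = probe_accuracy(H, z_t)
acc_after = probe_accuracy(H_null, z_t)
completeness = (acc_before - acc_after) / max(acc_before - chance, 1e-9)

# Selectivity: fraction of non-target probe accuracy preserved
# (1.0 = non-target property untouched).
selectivity = probe_accuracy(H_null, z_n) / probe_accuracy(H, z_n)

print(f"completeness ~ {completeness:.2f}, selectivity ~ {selectivity:.2f}")
```

Under this sketch, a more aggressive intervention (e.g., iterating the projection) would typically raise completeness while lowering selectivity, illustrating the tradeoff the abstract describes.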
Track: Main track
Submitted Paper: Yes
Published Paper: No
Submission Number: 67