Keywords: probes, deception, honesty, safety
TL;DR: We introduce a methodology for benchmarking deception probes via black-to-white performance boosts, evaluating probe-based monitoring strategies across diverse deception datasets and finding modest but informative gains alongside key limitations.
Abstract: AI assistants will occasionally respond deceptively to user queries. Recently, linear classifiers (called “deception probes”) have been trained to distinguish the internal activations of a language model during deceptive versus honest responses. However, it is unclear how effective these probes are at detecting deception in practice, or whether they are robust to simple counter-strategies from a deceptive assistant that wishes to evade detection. In this paper, we compare white-box monitoring (where the monitor has access to token-level probe activations) to black-box monitoring (without such access). We benchmark deception probes by the extent to which the white-box monitor outperforms the black-box monitor, i.e., the black-to-white performance boost. We find weak but encouraging black-to-white performance boosts from existing deception probes.
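To make the comparison concrete, here is a minimal sketch of how a black-to-white performance boost could be computed, assuming the two monitors each emit a scalar suspicion score per response and that the gap is measured as a difference in AUROC; the metric choice, function name, and toy data below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def black_to_white_boost(labels, black_box_scores, white_box_scores):
    """Illustrative sketch: quantify the benefit of probe access as the
    gain in AUROC when moving from black-box to white-box monitoring.

    labels: 1 for deceptive responses, 0 for honest responses.
    black_box_scores: monitor suspicion scores without probe activations.
    white_box_scores: monitor suspicion scores with access to token-level
        deception-probe activations.
    """
    auc_black = roc_auc_score(labels, black_box_scores)
    auc_white = roc_auc_score(labels, white_box_scores)
    return auc_white - auc_black  # positive => probes add detection signal

# Hypothetical usage with toy scores
labels = np.array([1, 0, 1, 0, 1, 0])
black = np.array([0.6, 0.4, 0.5, 0.5, 0.7, 0.3])
white = np.array([0.8, 0.2, 0.7, 0.4, 0.9, 0.3])
print(black_to_white_boost(labels, black, white))
```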
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21761