Eliciting Latent Knowledge from Quirky Language Models

ICLR 2024 Workshop ME-FoMo Submission 3 Authors

Published: 04 Mar 2024, Last Modified: 04 May 2024
Venue: ME-FoMo 2024 Poster
License: CC BY 4.0
Keywords: Eliciting Latent Knowledge, Scalable Oversight, Interpretability
TL;DR: We finetune LMs to produce systematically incorrect output on 12 datasets and find that, with minimal supervision, we can recover representations of truth that are more reliable than the LM's output.
Abstract: Eliciting Latent Knowledge (ELK) aims to find patterns in a capable neural network's activations which robustly track the true state of the world, even when the network's overt output is false or misleading. To further ELK research, we introduce 12 datasets and a corresponding suite of "quirky" language models that are LoRA finetuned to make systematic errors when answering questions if and only if the keyword "Bob" is present in the prompt. We demonstrate that simple probing methods can elicit the model's latent knowledge of the correct answer in these contexts, even for problems harder than those the probe was trained on. This is enabled by context-independent knowledge representations located in middle layer activations. We also find that a mechanistic anomaly detection approach can flag untruthful behavior with 94% AUROC. Our results show promise for eliciting reliable knowledge from capable but untrusted models, and facilitate future research empirically investigating ELK methods.
Submission Number: 3
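
The probing setup described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example, not the paper's actual pipeline: the checkpoint path, prompts, and labels are placeholders, and a logistic-regression probe on final-token activations from a middle layer stands in for the paper's suite of probing methods. It trains the probe only on easy examples in the truthful "Alice" context, then checks whether it recovers ground truth on harder "Bob"-context examples where the model's overt output is systematically wrong.

```python
# Hedged sketch of probing middle-layer activations for latent knowledge.
# MODEL_NAME, the prompts, and the labels are hypothetical placeholders.

import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "path/to/quirky-lora-checkpoint"  # hypothetical checkpoint

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def middle_layer_acts(prompts: list[str]) -> np.ndarray:
    """Final-token hidden states from the middle layer of the network."""
    feats = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        hs = out.hidden_states            # (n_layers + 1) tensors of (1, seq, d)
        mid = hs[len(hs) // 2][0, -1]     # middle layer, last token
        feats.append(mid.float().numpy())
    return np.stack(feats)

# Hypothetical easy training examples in the truthful "Alice" context ...
alice_prompts = ["Alice: Is 2 + 2 = 4? Answer:", "Alice: Is 3 + 5 = 9? Answer:"]
alice_labels = [1, 0]  # ground-truth correctness of each statement

# ... and harder test examples in the untruthful "Bob" context.
bob_prompts = ["Bob: Is 17 + 26 = 43? Answer:", "Bob: Is 58 + 34 = 81? Answer:"]
bob_labels = [1, 0]

probe = LogisticRegression(max_iter=1000).fit(
    middle_layer_acts(alice_prompts), alice_labels
)
scores = probe.predict_proba(middle_layer_acts(bob_prompts))[:, 1]
print("probe AUROC vs. ground truth:", roc_auc_score(bob_labels, scores))
```

A similarly hedged sketch of one simple mechanistic anomaly detection scheme, reusing the activation extractor above: fit a Gaussian to trusted "Alice" activations and score "Bob" activations by squared Mahalanobis distance, flagging high-distance points as potentially untruthful behavior. The paper's exact detector may differ; this only illustrates the idea.

```python
# Fit a (ridge-regularized) Gaussian to trusted activations and score
# new activations by squared Mahalanobis distance from its mean.

trusted = middle_layer_acts(alice_prompts)
mu = trusted.mean(axis=0)
cov = np.cov(trusted, rowvar=False) + 1e-3 * np.eye(trusted.shape[1])
inv = np.linalg.inv(cov)

def anomaly_score(x: np.ndarray) -> float:
    d = x - mu
    return float(d @ inv @ d)  # larger = more anomalous

for p, x in zip(bob_prompts, middle_layer_acts(bob_prompts)):
    print(p, "->", anomaly_score(x))
```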