ContextBench: Modifying Contexts for Targeted Latent Activation and Behaviour Elicitation

Published: 30 Sept 2025, Last Modified: 16 Nov 2025
Venue: Mech Interp Workshop (NeurIPS 2025) Poster
License: CC BY 4.0
Open Source Links: https://github.com/lasr-eliciting-contexts/ContextBench
Keywords: AI Safety, Interpretability tooling and software, Sparse Autoencoders
Other Keywords: Prompt Optimisation, Elicitation, Feature Visualisation
TL;DR: This paper motivates the AI safety case for generating targeted, linguistically fluent inputs that activate specific latent features or elicit model behaviours, and introduces a benchmark for methods that perform this task.
Abstract: Identifying inputs that trigger specific behaviours or latent features in language models has a wide range of potential safety use cases. We investigate a class of methods that generate targeted, linguistically fluent inputs which activate specific latent features or elicit model behaviours. We formalise this approach as *context modification* and present ContextBench, a benchmark whose tasks assess context modification methods on core capabilities and potential safety applications. Our evaluation framework measures both elicitation strength (the degree to which latent features or behaviours are successfully elicited) and linguistic fluency, highlighting how current state-of-the-art methods struggle to balance these objectives. We develop two novel enhancements to Evolutionary Prompt Optimisation (EPO), LLM-assistance and diffusion-model inpainting, which achieve state-of-the-art performance in balancing elicitation and fluency.
Submission Number: 61
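As a concrete illustration of the elicitation/fluency trade-off described in the abstract, here is a minimal Python sketch of a combined scoring function for a candidate context. This is an illustrative assumption, not ContextBench's actual API: the `sae.encode` call, the layer choice, and the weighting scheme are all placeholders.

```python
# Minimal sketch of scoring a candidate context on (a) elicitation strength of
# one SAE latent and (b) linguistic fluency. Hypothetical: `sae.encode`, the
# layer index, and the weighting are illustrative, not ContextBench's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in model; the benchmark's target models may differ
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def fluency_score(text: str) -> float:
    """Mean token log-probability under the model (higher = more fluent)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # negative mean cross-entropy per token

def elicitation_score(text: str, sae, latent_idx: int, layer: int = 6) -> float:
    """Max activation of one SAE latent over the context's hidden states.

    `sae.encode` is a placeholder for whatever sparse-autoencoder encoder
    the user has trained on the chosen layer's residual stream.
    """
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids, output_hidden_states=True).hidden_states[layer]
    acts = sae.encode(hidden)  # assumed shape: (1, seq_len, n_latents)
    return acts[0, :, latent_idx].max().item()

def combined_objective(text: str, sae, latent_idx: int,
                       fluency_weight: float = 1.0) -> float:
    # A context modification method must trade these two terms off;
    # this linear combination and its weight are arbitrary choices.
    return (elicitation_score(text, sae, latent_idx)
            + fluency_weight * fluency_score(text))
```

Under this framing, a method that maximises only the first term tends to produce high-activation but disfluent token strings, while weighting the second term pulls candidates back toward natural language, which is the tension the benchmark's two metrics are designed to expose.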