The Elicitation Game: Evaluating Capability Elicitation Techniques

Published: 01 May 2025, Last Modified: 18 Jun 2025ICML 2025 posterEveryoneRevisionsBibTeXCC BY 4.0
TL;DR: We train language models which hide their capabilities unless given a password and use these models to evaluate capability elicitation techniques based on prompting, fine-tuning, and activation steering.
Abstract: Capability evaluations are required to understand and regulate AI systems that may be deployed or further developed. Therefore, it is important that evaluations provide an accurate estimation of an AI system’s capabilities. However, in numerous cases, previously latent capabilities have been elicited from models, sometimes long after initial release. Accordingly, substantial efforts have been made to develop methods for eliciting latent capabilities from models. In this paper, we evaluate the effectiveness of capability elicitation techniques by intentionally training model organisms – language models with hidden capabilities that are revealed by a password. We introduce a novel method for training model organisms, based on circuit-breaking, which is more robust to elicitation techniques than standard password-locked models. We focus on elicitation techniques based on prompting and activation steering, and compare these to fine-tuning methods. Prompting techniques can elicit the actual capability of both password-locked and circuit- broken model organisms in an MCQA setting, while steering fails to do so. For a code-generation task, only fine-tuning can elicit the hidden capabilities of our novel model organism. Additionally, our results suggest that combining techniques improves elicitation. Still, if possible, fine-tuning should be the method of choice to improve the trustworthiness of capability evaluations.
Lay Summary: # Testing Hidden AI Capabilities: Can We Find What AI Systems Are Really Capable Of? When we build AI systems, we need to understand what they can actually do – both for safety and to make good decisions about how to use them. However, AI systems sometimes have hidden abilities that only show up later, which can be a problem if we've underestimated what they're capable of. To address this challenge, researchers have developed techniques to try to uncover these hidden capabilities. But how well do these techniques actually work? This study tested different methods by creating special AI models with intentionally hidden abilities – like having a secret password that unlocks extra skills. We created two types of "model organisms" (test AI systems): simple password-protected models and more sophisticated ones using a technique called "circuit-breaking" that makes the hidden abilities harder to find. We then tried various methods to uncover these hidden capabilities, including different prompting strategies, a technique called activation steering, and retraining the models. The results showed that simple prompting could reveal hidden abilities in multiple-choice question tasks, but steering techniques failed. For programming tasks, only retraining the AI model could uncover the hidden capabilities of our more sophisticated test models. Combining multiple techniques generally worked better than using just one. These findings suggest that if we want to be confident about what an AI system can do, we should use multiple evaluation methods, with retraining being the most reliable approach. This research helps make AI capability testing more trustworthy and thorough.
Link To Code: https://github.com/ Felhof/sandbagging-elicitation
Primary Area: Social Aspects->Alignment
Keywords: evaluations, capability elicitation, model organisms
Submission Number: 10569
Loading