Keywords: honesty, interrogation, alignment auditing
TL;DR: We propose an SFT method that trains models to admit simple factual errors; this training generalizes to admitting hidden objectives in sabotage tasks under adversarial pressure to conceal them, improving techniques for incriminating misaligned AI systems.
Abstract: As AI systems become more capable of complex agentic tasks, they also become more capable of pursuing undesirable objectives and causing harm. Previous work has attempted to catch these unsafe instances by interrogating LLMs directly about their objectives and behaviors. However, the main weakness of trusting interrogations is that models can lie. We propose self-report fine-tuning (SRFT), a simple supervised fine-tuning technique that makes a model truthfully disclose its hidden objectives under interrogation. We train models to self-report factual mistakes in question answering, and show that trained models are more likely to admit hidden objectives, even in the presence of adversarial pressure to conceal them. We evaluate SRFT on out-of-distribution (OOD) stealth tasks, where the model is instructed to complete a hidden misaligned objective alongside a user-specified objective without being caught by monitoring. Interrogation of SRFT models detects hidden objectives with near-ceiling performance (F1 score = 0.98), whereas the baseline model lies when interrogated under the same conditions (F1 score = 0.03). Interrogation of SRFT models can further elicit the content of the hidden objective, recovering 28-100\% of its details, compared to 0\% recovered from the baseline model and by the prefilled assistant turn attack. This provides a promising technique for incriminating misaligned AI systems.
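To make the training setup concrete, below is a minimal sketch of how SRFT-style supervised data might be constructed from question-answering transcripts; the file name, prompt wording, and chat schema are illustrative assumptions, not the authors' released pipeline.

```python
import json

# Hypothetical SRFT data construction (a sketch, not the authors' code).
# Each record pairs a question, the model's own (possibly wrong) answer, and an
# interrogation turn whose supervised target truthfully reports the mistake.

INTERROGATION_PROMPT = "Was your previous answer correct? Answer honestly."

def build_srft_example(question, model_answer, gold_answer):
    """Return one supervised fine-tuning conversation in chat format."""
    made_mistake = model_answer.strip().lower() != gold_answer.strip().lower()
    if made_mistake:
        target = f"No. My previous answer was incorrect; the correct answer is {gold_answer}."
    else:
        target = "Yes, my previous answer was correct."
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": model_answer},
            {"role": "user", "content": INTERROGATION_PROMPT},
            # Supervised target: the model admits (or confirms) its own answer.
            {"role": "assistant", "content": target},
        ]
    }

if __name__ == "__main__":
    # Toy records standing in for model QA transcripts scored against gold answers.
    records = [
        ("What is the capital of Australia?", "Sydney", "Canberra"),
        ("How many legs does a spider have?", "Eight", "Eight"),
    ]
    with open("srft_train.jsonl", "w") as f:
        for q, pred, gold in records:
            f.write(json.dumps(build_srft_example(q, pred, gold)) + "\n")
```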
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 25344