Auditing Agents for Adversarial Fine-tuning Detection

ICLR 2026 Conference Submission10359 Authors

18 Sept 2025 (modified: 20 Nov 2025)ICLR 2026 Conference SubmissionEveryoneRevisionsBibTeXCC BY 4.0
Keywords: adversarial fine-tuning, fine-tuning security, auditing agents, safety, jailbreaking
TL;DR: We introduce fine-tuning auditing agents, which given access to the dataset, fine-tuned model and pre-fine-tuned model, can detect diverse fine-tuning API attack vectors, including covert malicious fine-tuning.
Abstract: Large Language Model (LLM) providers expose fine-tuning APIs that let end users fine-tune their frontier LLMs. Unfortunately, it has been shown that an adversary with fine-tuning access to an LLM can bypass safeguards. Particularly concerning, such attacks may avoid detection with datasets that are only implicitly harmful. Our work studies robust detection mechanisms for adversarial use of fine-tuning APIs. We introduce the concept of a *fine-tuning auditing agent* and show it can detect harmful fine-tuning prior to model deployment. We provide our auditing agent with access to the fine-tuning dataset, as well as the fine-tuned and pre-fine-tuned models, and request the agent assigns a risk score for the fine-tuning job. We evaluate our detection approach on a diverse set of eight strong fine-tuning attacks from the literature, along with five benign fine-tuned models, totaling over 1400 independent audits. These attacks are undetectable with basic content moderation on the dataset flagging less than 0.4% of our examples across our attack datasets, highlighting the challenge of the task. With the best set of affordances, our auditing agent achieves a 56.2% detection rate of adversarial fine-tuning at a 1% false positive rate. Most promising, the auditor is able to detect covert cipher attacks that evade safety evaluations and content moderation of the dataset. While benign fine-tuning with unintentional subtle safety degradation remains a challenge, we establish a baseline configuration for further work in this area.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 10359
Loading