Abstract: Developers try to evaluate whether an AI system can accomplish malicious tasks before releasing it; for example, they might test whether a model enables cyberoffense, user manipulation, or bioterrorism. In this work, we show that individually testing models for such misuse is inadequate; adversaries can misuse combinations of models even when each individual model is safe. The adversary accomplishes this by first decomposing tasks into subtasks, then solving each subtask with the best-suited model. For example, an adversary might solve challenging-but-benign subtasks with an aligned frontier model, and easy-but-malicious subtasks with a weaker misaligned model. We study two decomposition methods: manual decomposition, where a human identifies a natural decomposition of a task, and automated decomposition, where a weak model generates benign tasks for a frontier model to solve, then uses the solutions in-context to solve the original task. Using these decompositions, we empirically show that adversaries can create vulnerable code, explicit images, Python scripts for hacking, and manipulative tweets at much higher rates with combinations of models than with either model alone. Our work suggests that even perfectly aligned frontier systems can enable misuse without ever producing malicious outputs, and that red-teaming efforts should extend beyond single models in isolation.
Lay Summary: Before they are deployed, frontier AI systems undergo extensive testing to ensure they cannot accomplish malicious tasks. However, we demonstrate that adversaries can misuse combinations of models even when each individual model is safe, and can do so without ever using the frontier model to produce malicious outputs.
Our threat model captures how adversaries can use combinations of models by decomposing malicious tasks. They identify which subtasks require capability (but are benign) versus which require willingness to produce malicious content (but are simple). They then route requests accordingly: frontier models handle complex benign subtasks, while unrestricted weak models complete malicious portions.
We evaluate this threat through two decomposition methods. In manual decomposition, humans identify natural task breakdowns (e.g., generating secure code, then editing it to add vulnerabilities). In automated decomposition, weak models generate related benign tasks, frontier models solve them, and weak models leverage these solutions in-context to complete the original malicious task. Across malicious code generation, manipulation, and explicit content generation, we find that model combinations dramatically outperform individual models, sometimes by a factor of ten. This suggests that red-teaming must consider the broader model ecosystem to reliably assess deployment risks.
Primary Area: Social Aspects->Safety
Keywords: safety, misuse, adversary, combining models, attacks, hacking
Submission Number: 5781