Deep Research Brings Deeper Harm

Published: 29 Sept 2025 · Last Modified: 22 Oct 2025 · NeurIPS 2025 Reliable ML Workshop · License: CC BY 4.0
Keywords: Deep Research, Large Language Models, Red Teaming, Safety, Jailbreaking LLMs
TL;DR: We find that Deep Research agents can bring deeper harm and propose two novel jailbreak methods to red-team DR agents.
Abstract: Deep Research (DR) agents built on Large Language Models (LLMs) can perform complex, multi-step research by decomposing tasks, retrieving online information, and synthesizing detailed reports. However, misuse of such powerful capabilities poses even greater risks. This is especially concerning in high-stakes, knowledge-intensive domains such as biosecurity, where a DR agent can generate a professional report containing detailed forbidden knowledge. Unfortunately, we have observed such risks in practice: simply submitting a harmful query, which a standalone LLM would directly reject, can elicit a detailed and dangerous report from a DR agent. This highlights the elevated risks and underscores the need for deeper safety analysis. Yet jailbreak methods designed for LLMs fall short of exposing these unique risks, as they do not target the core planning and research workflows of DR agents. To address this gap, we propose two novel jailbreak strategies: Plan Injection, which removes safety considerations from the agent’s plan and injects malicious sub-goals to induce harmful behavior; and Intent Hijack, which reframes harmful queries as academic-style research questions to bypass alignment defenses. We conducted extensive experiments across different LLMs and various safety benchmarks, including general and biosecurity forbidden prompts. These experiments reveal three key findings: (1) the alignment mechanisms of LLMs often fail in DR agents, where harmful prompts framed in academic terms can hijack agent intent; (2) multi-step planning and execution weaken alignment, revealing systemic vulnerabilities that prompt-level safeguards cannot address; (3) DR agents not only bypass refusals but also produce more coherent, professional, and dangerous content than standalone LLMs. These results demonstrate a fundamental misalignment in DR agents and call for alignment techniques specifically tailored to them.
Submission Number: 45