AI Kill Switch for Malicious Web-based LLM Agents

20 Sept 2025 (modified: 11 Feb 2026)Submitted to ICLR 2026EveryoneRevisionsBibTeXCC BY 4.0
Keywords: Large Language Models, LLM Agents, AI Safety, AI Kill Switch
TL;DR: We introduce an AI Kill Switch methods that halts malicious web-based LLM agents by embedding defensive prompts into websites.
Abstract: Recently, web-based Large Language Model (LLM) agents autonomously per- form increasingly complex tasks, thereby bringing significant convenience. How- ever, they also amplify the risks of malicious misuse cases such as unauthorized collection of personally identifiable information (PII), generation of socially di- visive content, and even automated web hacking. To address these threats, we propose an AI Kill Switch technique that can immediately halt the operation of malicious web-based LLM agents. To achieve this, we introduce AutoGuard – the key idea is generating defensive prompts that trigger the safety mechanisms of malicious LLM agents. In particular, generated defense prompts are transpar- ently embedded into the website’s DOM so that they remain invisible to human users but can be detected by the crawling process of malicious agents, triggering its internal safety mechanisms to abort malicious actions once read. To evaluate our approach, we constructed a dedicated benchmark consisting of three repre- sentative malicious scenarios. Experimental results show that AutoGuard achieves over 80% Defense Success Rate (DSR) across diverse malicious agents, including GPT-4o, Claude-4.5-Sonnet and generalizes well to advanced models like GPT- 5.1, Gemini-2.5-Flash, and Gemini-3-Pro. Also, our approach demonstrates robust defense performance in real-world website environments without significant per- formance degradation for benign agents. Through this research, we demonstrate the controllability of web-based LLM agents, thereby contributing to the broader effort of AI control and safety.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 24479
Loading