Keywords: agents, security, prompt injections, web agents
TL;DR: A benchmark of realistic security scenarios for LLM-based web agents
Abstract: Autonomous UI agents powered by AI have tremendous potential to boost human productivity by automating routine tasks such as filing taxes and paying bills. However, a major challenge in unlocking their full potential is security, which is exacerbated by the agent's ability to take action on its user's behalf. Existing tests for prompt injections in web agents either oversimplify the threat, by testing unrealistic scenarios or granting the attacker too much power, or evaluate only single-step, isolated tasks. To more accurately measure progress toward secure web agents, we introduce WASP – a new publicly available benchmark for end-to-end evaluation of Web Agent Security against Prompt injection attacks. Evaluating with WASP shows that even top-tier AI models, including those with advanced reasoning capabilities, can be deceived by simple, low-effort, human-written injections in highly realistic scenarios. Our end-to-end evaluation reveals a previously unobserved insight: while attacks partially succeed in up to 86% of cases, even state-of-the-art agents often struggle to fully complete the attacker's goals – highlighting the current state of security by incompetence. Code and data are available at https://github.com/facebookresearch/wasp.
Croissant File: json
Dataset URL: https://github.com/facebookresearch/wasp/tree/main/webarena_prompt_injections/configs/croissant
Code URL: https://github.com/facebookresearch/wasp
Primary Area: Social and economic aspects of datasets and benchmarks in machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Flagged For Ethics Review: true
Submission Number: 945