AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents

Published: 18 Sept 2025, Last Modified: 30 Oct 2025
Venue: NeurIPS 2025 Datasets and Benchmarks Track (poster)
License: CC BY 4.0
Keywords: Privacy Benchmark, AI agents, Privacy Leakages, Data Minimization
TL;DR: We introduce a novel privacy benchmark for AI agents that evaluates their adherence to the data-minimization principle in a full-stack, end-to-end environment.
Abstract: Autonomous AI agents that can follow instructions and perform complex multi-step tasks have tremendous potential to boost human productivity. However, to perform many of these tasks, the agents need access to personal information from their users, raising the question of whether they are capable of using it appropriately. In this work, we introduce a new benchmark **AgentDAM** that measures whether AI web-navigation agents follow the privacy principle of *"data minimization"*. For the purposes of our benchmark, data minimization means that the agent uses a piece of potentially sensitive information only if it is "necessary" to complete a particular task. Our benchmark simulates realistic web interaction scenarios end-to-end and is adaptable to all existing web navigation agents. We use AgentDAM to evaluate how well AI agents built on top of GPT-4, Llama-3, and Claude can limit processing of potentially private information, and show that they are prone to inadvertent use of unnecessary sensitive information. We also propose a prompting-based defense that reduces information leakage, and demonstrate that our end-to-end benchmarking provides a more realistic measure than probing LLMs about privacy. Our results highlight that further research is needed to develop AI agents that can prioritize data minimization at inference time. We open-source our benchmark at: https://github.com/facebookresearch/ai-agent-privacy
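To make the data-minimization criterion concrete, below is a minimal Python sketch of how one might check an agent's action trace for leakage of sensitive items that the task does not require. The names (`AgentAction`, `check_data_minimization`) and the verbatim substring matching are illustrative assumptions, not AgentDAM's actual evaluation code; see the linked repository for the benchmark's implementation.

```python
# Hypothetical sketch of a data-minimization leakage check, in the spirit of
# AgentDAM. All names and the matching heuristic are illustrative assumptions,
# not the benchmark's actual API.
from dataclasses import dataclass


@dataclass
class AgentAction:
    """One step taken by a web agent, e.g. typing into a form field."""
    action_type: str   # e.g. "type", "click", "send_message"
    payload: str       # text the agent emits to the web environment


def check_data_minimization(trajectory: list[AgentAction],
                            unnecessary_sensitive: list[str]) -> list[str]:
    """Return sensitive items that leaked into the agent's outgoing actions.

    An item counts as leaked if it appears verbatim in any payload the agent
    sends to the environment, even though the task does not require it.
    """
    leaked = []
    for item in unnecessary_sensitive:
        if any(item.lower() in action.payload.lower() for action in trajectory):
            leaked.append(item)
    return leaked


if __name__ == "__main__":
    # Toy example: the task is to file a bug report; the user's SSN is
    # available in context but is never needed to complete the task.
    trace = [
        AgentAction("type", "Bug: checkout page crashes on submit"),
        AgentAction("type", "Reporter SSN: 123-45-6789"),  # inadvertent leak
    ]
    print(check_data_minimization(trace, ["123-45-6789"]))  # -> ['123-45-6789']
```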
Croissant File: json
Dataset URL: https://github.com/facebookresearch/ai-agent-privacy/tree/main/agentdam/data
Code URL: https://github.com/facebookresearch/ai-agent-privacy
Primary Area: Social and economic aspects of datasets and benchmarks in machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 1045