Keywords: Privacy protection, Inferential Privacy, Embodied Agents
TL;DR: Privacy-conscious decision-making is a critical and under-explored challenge for embodied agents.
Abstract: The reasoning capabilities of embodied agents introduce a critical, under-explored inferential privacy challenge, in which an agent risks generating sensitive conclusions from ambient data. This capability creates a fundamental tension between an agent's utility and user privacy, rendering traditional static controls ineffective. To address this, we propose a framework that reframes privacy as a dynamic learning problem grounded in the theory of Contextual Integrity (CI). Our approach enables agents to proactively learn and adapt to individual privacy norms through interaction, outlining a research agenda for developing embodied agents that are both capable and trustworthy safeguards of user privacy.
Submission Type: Position/Review Paper (4-9 Pages)
NeurIPS Resubmit Attestation: This submission is not a resubmission of a NeurIPS 2025 submission.
Submission Number: 158