Keywords: Privacy, Inference, Inference-Based Privacy, Risk, AI Safety
Abstract: This position paper argues that inference-based privacy (IBP) risks (AI systems' ability to infer sensitive personal information from seemingly innocuous inputs) represent a distinct and urgent threat to privacy that remains critically under-addressed in current AI safety discourse. Unlike traditional privacy violations, which involve unauthorized access to known data, IBP risks arise from AI systems' ability to infer private attributes through indirect signals and correlations, even about individuals who are not present in training datasets. We show that these risks are not hypothetical: they are already evident in deployed systems, from radiology models inferring protected health attributes to large language models deducing personal demographics from subtle linguistic cues. Existing regulatory and technical frameworks, designed primarily to prevent explicit data leakage, are ill-equipped to address these emergent inference threats. We call on researchers, policymakers, and practitioners to recognize IBP as a distinct and immediate category of AI safety risk, and to develop dedicated mitigation strategies in response.
Submission Number: 688