Privacy Challenges in Conversational AI: Three Use Cases and Prospects for Decentralised Data Governance with Solid

Published: 09 Mar 2026, Last Modified: 09 Mar 2026
Venue: SoSy2026 Privacy Paper
License: CC BY-SA 4.0
Keywords: privacy
TL;DR: Conversational AI privacy use cases, proposing Solid as an architecture
Abstract: Conversational AI systems accumulate intimate, longitudinal records of user interactions -- including personal disclosures, intellectual property, health information, and private reflections -- under conditions of structural privacy asymmetry. This paper presents three empirically grounded use cases documenting privacy failures in centralised conversational AI platforms: (1) an assurance gap between verbal confidentiality promises and documented policy, (2) unverifiable data access by AI agents operating through external service connectors, and (3) undocumented scope creep through unannounced screen-sharing capabilities. For each use case, we analyse the privacy failure, map it to relevant regulatory frameworks (GDPR, EU AI Act), and assess how the Solid protocol's decentralised architecture -- user-controlled Pods, granular access control, and linked data standards -- could address the identified gaps. We draw on prior work including SocialGenPod and the W3C Data Privacy Vocabulary (DPV) to contextualise our proposal, while identifying limitations that decentralisation alone cannot resolve. We frame these use cases as a research agenda for the Solid community, proposing extensions including Pod-based conversation storage, standardised audit logging, and machine-readable consent records for AI interactions.
Submission Number: 2