From Regulation to Compliance: Expert Views on Aligning Explainable AI with the EU AI Act

Authors: AAAI 2026 Workshop AIGOV Submission 31 Authors

21 Oct 2025 (modified: 26 Nov 2025) · AAAI 2026 Workshop AIGOV Submission · CC BY 4.0
Keywords: AI Regulations, Explainable AI, AI Compliance, Semi-structured Interviews
TL;DR: Interviews with domain experts reveal that explainability under the EU AI Act is context-dependent and hindered by regulatory vagueness, underscoring the need for domain-specific and user-centered XAI to bridge technical–compliance gaps.
Abstract: Explainable AI (XAI) aims to support people who interact with high-stakes, AI-driven decisions, and the EU AI Act requires that users be able to appropriately interpret the outputs of high-risk AI systems (Article 13) and that human oversight prevent undue reliance on those systems (Article 14). Yet the Act offers little technical guidance on implementing explainability, leaving interpretability methods difficult to operationalize and compliance obligations unclear. To address these gaps, we interviewed eight domain experts across legal, compliance, and technical roles to explore (1) how explainability is defined and perceived under the Act, (2) the practical and regulatory obstacles to XAI implementation, and (3) recommended solutions and future directions. Our findings reveal that domain experts view explainability as context- and audience-dependent, face challenges arising from regulatory vagueness and technical trade-offs, and advocate for domain-specific rules, hybrid methods, and user-centered explanations. These insights provide the basis for a potential framework to align XAI methods with regulatory requirements and governance compliance, and suggest actionable steps for policymakers and practitioners.
Submission Number: 31