A Practical Approach to Fostering Trust in AI via Semantically Supported Accountability

Published: 02 Jan 2025, Last Modified: 03 Mar 2025, AAAI 2025 Workshop AIGOV Poster, License: CC BY 4.0
Keywords: AI trustworthiness, AI governance, semantic web, operations, lessons learned
TL;DR: We present a practical view on operationalizing the transparency and accountability aspects of AI governance via semantic technologies.
Abstract: In this demonstration (or poster), we present a walkthrough of a practical approach to elevating trust in artificial intelligence (AI)-enabled systems, using semantically grounded methods for improved transparency and well-defined accountability. Within the larger context of the responsible use of AI, this investigation demonstrates actionable steps toward that goal while surfacing lessons learned and challenges faced along the way. We leverage semantic techniques and tools, including ontologies and automated reasoning, to operationalize the communication of accountability and transparency information throughout the design, deployment, and ongoing evaluation of AI-enabled systems. A simplified use case involving the development of a clinical decision support system (CDSS) serves as the backdrop for this investigation.
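
To give a concrete flavor of what "semantically grounded accountability" can look like in practice, the following minimal sketch records provenance and accountability triples for a CDSS recommendation and queries them with SPARQL. This is an illustration under stated assumptions, not the authors' implementation: it assumes the rdflib library and the W3C PROV-O vocabulary, and every ex: term, identifier, and value is hypothetical.

# Minimal sketch: accountability metadata for a CDSS recommendation,
# expressed as RDF triples using PROV-O and queried with SPARQL.
# Assumptions: rdflib is installed; all ex: terms and names are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import PROV, XSD

EX = Namespace("http://example.org/cdss#")

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# Record that a recommendation was generated by a specific inference run,
# which used a particular model version and is associated with an
# accountable team (all identifiers are illustrative).
g.add((EX.recommendation42, RDF.type, PROV.Entity))
g.add((EX.recommendation42, PROV.wasGeneratedBy, EX.inferenceRun7))
g.add((EX.inferenceRun7, RDF.type, PROV.Activity))
g.add((EX.inferenceRun7, PROV.used, EX.cdssModelV2))
g.add((EX.inferenceRun7, PROV.wasAssociatedWith, EX.clinicalAITeam))
g.add((EX.clinicalAITeam, RDF.type, PROV.Agent))
g.add((EX.cdssModelV2, EX.lastEvaluated, Literal("2024-06-01", datatype=XSD.date)))

# Surface the accountable agent and the model behind a given recommendation.
query = """
PREFIX prov: <http://www.w3.org/ns/prov#>
PREFIX ex: <http://example.org/cdss#>
SELECT ?agent ?model WHERE {
    ex:recommendation42 prov:wasGeneratedBy ?run .
    ?run prov:wasAssociatedWith ?agent ;
         prov:used ?model .
}
"""
for agent, model in g.query(query):
    print(f"Accountable agent: {agent}; model used: {model}")

On top of such a graph, an automated reasoner or a validation step (e.g., SHACL shapes) could check that every deployed model version has at least one associated accountable agent, which is the kind of operational guarantee this abstract alludes to.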
Submission Number: 22