User Perception of Ontology-Based Explanations of AI Models

Published: 01 Jan 2024, Last Modified: 26 Jul 2025, CHIRA (2) 2024, CC BY-SA 4.0
Abstract: When using AI models in high-stakes applications, it is crucial for a decision maker to understand why the model came to a certain conclusion. Ontology-based explanation techniques for artificial neural networks aim to provide explanations adapted to the domain vocabulary (encoded using an ontology) in order to make them easier to interpret and reason about. However, few studies actually explore the perception of ontology-based explanations and their effectiveness compared to more common explanation techniques for neural networks (e.g., LIME, GradCAM, etc.). The paper proposes two benchmark datasets with different task representations (tabular and graph) and a methodology for comparing users' effectiveness in processing explanations, employing both objective (decision time, accuracy) and subjective metrics. The methodology and datasets were then used in a user study to compare several explanation representations: a non-ontology-based one and three ontology-based ones (a textual representation of ontological inference, an inference graph, and an attributive representation). According to the subjective evaluation, graph and textual explanations caused the least difficulty for the participants. Objective metrics vary with the size of the ontology, but inference graphs show good results in all the examined cases. Surprisingly, non-ontology-based explanations have almost the same positive effect on decision-making as ontology-based ones, although they are subjectively somewhat harder to process.