Ontology-Based Explanations of Neural Networks: A User Perspective

Published: 01 Jan 2024, Last Modified: 26 Jul 2025, HCI (51) 2024, CC BY-SA 4.0
Abstract: A variety of methods exist for interpreting and explaining predictions produced by neural networks; however, most of them are intended for experts in machine learning and artificial intelligence rather than for domain experts. Ontology-based explanation methods aim to address this issue, building on the rationale that presenting explanations in terms of the problem domain, in a form accessible and understandable to the human expert, can improve the understandability of explanations. However, very few studies examine the real effects of ontology-based explanations and how humans perceive them. At the same time, it is widely recognized that experimental evaluation of explanation techniques is highly important and increasingly attracts the attention of both the AI and HCI communities. In this paper, we explore users' interaction with ontology-based explanations of neural networks in order to (a) check whether such explanations simplify the decision-maker's task and (b) assess and compare various forms of ontology-based explanations. We collect both objective performance metrics (i.e., decision time and accuracy) and subjective ones (via a questionnaire). Our study shows that ontology-based explanations can improve decision-makers' performance; however, complex logical explanations are not always better than a simple indication of the key concepts influencing the model output.