How level of explanation detail affects human performance in interpretable intelligent systems: A study on explainable fact checking
Abstract: Explainable artificial intelligence (XAI) systems aim to provide users with information to help them better understand computational models and reason about why
outputs were generated. However, there are many different ways an XAI interface
might present explanations, which makes designing an appropriate and effective
interface an important and challenging task. Our work investigates how different
types and amounts of explanatory information affect users' ability to use explanations to understand system behavior and improve task performance. The presented
research employs a system for detecting the truthfulness of news statements. In a
controlled experiment, participants were tasked with using the system to assess news
statements as well as to learn to predict the output of the AI. Our experiment compares various levels of explanatory information to contribute empirical data about
how explanation detail can influence utility. The results show that more explanatory
information improves participants' understanding of AI models, but this benefit comes
at the cost of the time and attention needed to make sense of the explanations.