Explanation Perspectives from the Cognitive Sciences---A Survey

28 Aug 2020, OpenReview Archive Direct Upload
Abstract: With growing adoption of AI across fields such as healthcare, finance, and the justice system, explaining an AI decision has become more important than ever before. Development of human-centric explainable AI (XAI) systems necessitates an understanding of the requirements of the human-in-the-loop seeking the explanation. This includes the cognitive behavioral purpose that the explanation serves for its recipients, and the structure that the explanation uses to reach those ends. An understanding of the psychological foundations of explanations is thus vital for the development of effective human-centric XAI systems. Towards this end, we survey papers from the cognitive science literature that address the following broad questions: (1) what is an explanation, (2) what are explanations for, and (3) what are the characteristics of good and bad explanations. We organize the insights gained therein by highlighting the advantages and shortcomings of various explanation structures and theories, discuss their applicability across different domains, and analyze their utility to various types of humans-in-the-loop. We summarize the key takeaways for human-centric design of XAI systems, and recommend strategies to bridge the existing gap between XAI research and practical needs. We hope this work will spark the development of novel human-centric XAI systems.