Abstract: As AI systems come to permeate human society, there is an increasing need for such systems to explain their actions, conclusions, or decisions. This need is fuelling a surge of interest in machine-generated explanations within the field of explainable AI. In this chapter, we examine work on explanations in areas ranging from AI to philosophy, psychology, and cognitive science. We point to the different notions of explanation at play in these areas. We further discuss the theoretical work in philosophy and psychology on (good) explanation and its implications for research on machine-generated explanations. Lastly, we consider the pragmatic nature of explanations and demonstrate its importance in the context of trust and fidelity. Throughout the chapter, we suggest paths for further research on explanation in AI, psychology, philosophy, and cognitive science.