Abstract: With the increasing use of AI, recent research in human–computer interaction explores Explainable AI (XAI) to make AI advice more interpretable. While research addresses the effects of incorrect AI advice on AI-assisted decision-making, the impact of incorrect explanations has so far been neglected. Additionally, recent work shows that not only do different explanation modalities impact decision-makers, but human factors also play a critical role. To analyze relevant phenomena influencing AI-assisted decision-making, this work explores the impacting factors by conceptualizing theories of appropriate reliance and taking the first steps toward empirical evidence. In a study with 136 participants, we show that imperfect XAI impacts humans' reliance on AI and the human–AI team performance. Additionally, we find that cognitive styles affect decision-making across different explanation modalities. Hence, we shed light on diverse factors that impact human–AI collaboration and provide guidelines for designers to tailor such human–AI collaboration systems to individuals' needs.
External IDs: dblp:journals/tiis/SpitzerMTFPK25