Keywords: Explanation, Human-AI complementarity, Decision theory
TL;DR: We emphasize the value of complementary information in AI-assisted human decision making.
Abstract: Multiple agents are increasingly combined to make decisions with the expectation of achieving complementary performance, where the decisions they make
together outperform those made individually. However, knowing how to improve
the performance of collaborating agents requires knowing what information and
strategies each agent employs. With a focus on human-AI pairings, we contribute a
decision-theoretic framework for characterizing the value of information. By defining complementary information, our approach identifies opportunities for agents to
better exploit available information in AI-assisted decision workflows. We present
a novel explanation technique (ILIV-SHAP) that adapts SHAP explanations to
highlight human-complementing information. We validate the effectiveness of
the framework on examples from chest X-ray diagnosis and deepfake detection
and ILIV-SHAP through a study of human-AI decision-making. We also find that
presenting ILIV-SHAP with AI predictions leads to reliably greater reductions in
error over non-AI assisted decisions more than vanilla SHAP.
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 14178