Contrastive explanations of BDI agents

Published: 19 Dec 2025 · Last Modified: 05 Jan 2026 · AAMAS 2026 Full Paper · License: CC BY 4.0
Keywords: Explainable Agents, Contrastive explanations, Belief-Desire-Intention (BDI)
TL;DR: We define a mechanism for BDI agents to provide contrastive explanations ("why did you do X instead of Y?") and evaluate this mechanism computationally and with human subjects.
Abstract: The ability of autonomous systems to provide explanations is important for supporting transparency and for developing (appropriate) trust. Prior work has defined a mechanism for Belief-Desire-Intention (BDI) agents to answer questions of the form "why did you do action _X_?". However, we know that people ask _contrastive_ questions ("why did you do _X_ _instead of_ _Y_?"). We therefore extend previous work to answer such questions. A computational evaluation shows that using contrastive questions yields a significant reduction in explanation length. A human-subject evaluation was conducted to assess whether such contrastive answers are preferred, and how well they support trust development and transparency. We found some evidence that contrastive answers are preferred over full (non-contrastive) answers, and some evidence that they led to higher trust, perceived understanding, and confidence in the system's correctness. We also evaluated the benefit of providing explanations at all. Surprisingly, there was no clear benefit, and in some situations we found evidence that providing a (full) explanation was worse than providing no explanation at all.
Area: Engineering and Analysis of Multiagent Systems (EMAS)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 24