A multi-level explainability framework for engineering and understanding BDI agents

Published: 01 Jan 2025 · Last Modified: 14 Apr 2025 · Auton. Agents Multi Agent Syst. 2025 · CC BY-SA 4.0
Abstract: As the complexity of software systems rises, explainability - i.e. the ability of systems to provide explanations of their behaviour - becomes a crucial property. This is true for any AI-based system, including autonomous systems that exhibit decision-making capabilities such as multi-agent systems. Although explainability is generally considered useful for increasing end-users' trust, we argue it is also a valuable property for software engineers, developers, and designers when debugging and validating the system's behaviour. In this paper, we propose a multi-level explainability framework for BDI agents that generates explanations of a running system from logs at different levels of abstraction, tailored to different users and their needs. We describe the mapping from logs to explanations, and present a prototype tool based on the JaCaMo platform which implements the framework.
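To give a rough idea of the kind of log-to-explanation mapping the abstract refers to, the following is a minimal, hypothetical sketch (not the paper's actual framework or the JaCaMo API): it takes invented BDI log records (belief additions, goal adoptions, plan selections) and renders them at two abstraction levels, a verbose developer-oriented trace and a condensed end-user summary. All names (`LogEntry`, the event kinds, the example agent and goals) are assumptions for illustration only.

```java
import java.util.List;

// Hypothetical sketch of mapping BDI execution-log entries to explanations
// at two levels of abstraction. Not the authors' implementation.
public class LogToExplanationSketch {

    // A made-up log record for a single BDI reasoning-cycle event.
    record LogEntry(String agent, String kind, String detail) {}

    // Developer-level explanation: a verbose trace of every logged event.
    static String developerExplanation(LogEntry e) {
        return String.format("[%s] %s: %s", e.agent(), e.kind(), e.detail());
    }

    // End-user-level explanation: only goal adoptions and plan selections,
    // phrased as intentions; low-level belief updates are hidden.
    static String endUserExplanation(LogEntry e) {
        return switch (e.kind()) {
            case "GOAL_ADOPTED"  -> e.agent() + " wanted to achieve " + e.detail();
            case "PLAN_SELECTED" -> e.agent() + " chose the plan " + e.detail() + " to do so";
            default              -> null;
        };
    }

    public static void main(String[] args) {
        List<LogEntry> log = List.of(
            new LogEntry("robot1", "BELIEF_ADDED",  "door(open)"),
            new LogEntry("robot1", "GOAL_ADOPTED",  "deliver(parcel)"),
            new LogEntry("robot1", "PLAN_SELECTED", "go_through_door"));

        // Developer view: every event.
        log.forEach(e -> System.out.println(developerExplanation(e)));

        // End-user view: only intention-level events.
        log.stream()
           .map(LogToExplanationSketch::endUserExplanation)
           .filter(s -> s != null)
           .forEach(System.out::println);
    }
}
```

The design choice illustrated here, filtering and rephrasing the same underlying log depending on the audience, is one plausible way to read the abstract's "different levels of abstraction, tailored to different users"; the paper itself defines the actual mapping.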