Abstract: Explainability has become one of the most important concepts in Artificial Intelligence (AI), giving rise to an entire area of study called Explainable AI (XAI). In this paper, we propose an approach for engineering explainable BDI agents based on the use of argumentation techniques. In particular, our approach is based on modelling argumentation schemes, which provide not only the reasoning patterns agents use to instantiate arguments but also templates for agents to translate arguments from an agent-oriented programming language into natural language. Thus, using our approach, agents are able to provide explanations about their mental attitudes and decision-making not only to other software agents but also to humans. This is particularly useful when agents and humans carry out tasks collaboratively.