Leveraging PDDL to Make Inscrutable Agents Interpretable: A Case for Post Hoc Symbolic Explanations for Sequential-Decision Making Problems

Jun 08, 2021 (edited Sep 11, 2021) · XAIP 2021
  • Keywords: Post Hoc Explanation, Model Learning
  • TL;DR: Learn approximate PDDL models to explain sequential decisions made using models that are inscrutable to the human in the loop.
  • Abstract: There has been considerable interest in developing explanatory techniques within the ICAPS community for various flavors of planning, as evidenced by the popularity of the XAIP workshop in the past few years. However, most existing works in XAIP focus on creating explanatory techniques for native planning-based systems that leverage human-specified models. While this has led to the development of valuable techniques and tools, our community tends to overlook a very important avenue where XAIP techniques, particularly ones designed around symbolic human-readable models, could make a practical and immediate impact: generating symbolic post hoc explanations for sequential decisions produced by inscrutable decision-making systems, including reinforcement learning and inscrutable model-based planning or approximate dynamic programming methods. Through this paper, we hope to discuss how we could generate such post hoc explanations, motivate how one could use current XAIP techniques to address many of the explanatory challenges within this realm, and discuss some of the open research challenges that arise when we try to apply our methods in this new application context.
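The TL;DR above mentions learning approximate PDDL models from an agent's behavior. As a rough illustration of the idea (not the paper's actual method), the sketch below infers a STRIPS-style action model from observed state transitions: preconditions are propositions true in every pre-state, and add/delete effects are propositions consistently gained or lost. All names (`Transition`, `learn_action_model`, the blocks-world propositions) are illustrative assumptions.

```python
# Illustrative sketch only: infer an approximate STRIPS-style action model
# from observed transitions of an otherwise inscrutable agent.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    pre: frozenset   # propositions true before the action was taken
    post: frozenset  # propositions true after the action was taken

def learn_action_model(transitions):
    """Approximate one action's preconditions and effects from traces."""
    # Preconditions: propositions that held in every observed pre-state.
    precond = frozenset.intersection(*(t.pre for t in transitions))
    # Add effects: propositions the action made true in every trace.
    add = frozenset.intersection(*(t.post - t.pre for t in transitions))
    # Delete effects: propositions the action made false in every trace.
    delete = frozenset.intersection(*(t.pre - t.post for t in transitions))
    return precond, add, delete

# Hypothetical example: two observed uses of a "pickup"-like action.
traces = [
    Transition(frozenset({"clear_a", "ontable_a", "handempty"}),
               frozenset({"holding_a"})),
    Transition(frozenset({"clear_a", "ontable_a", "handempty", "on_b_c"}),
               frozenset({"holding_a", "on_b_c"})),
]
pre, add, delete = learn_action_model(traces)
```

Such an induced model is only as faithful as the traces it is built from, which is precisely one of the open challenges the abstract alludes to: the symbolic explanation is an approximation of the underlying decision-maker, not a transcription of it.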