Towards Temporally Uncertain Explainable AI Planning

Published: 01 Jan 2022 · Last Modified: 04 Nov 2024 · ICDCIT 2022 · CC BY-SA 4.0
Abstract: Automated planning can handle increasingly complex applications, but it can produce unsatisfactory results when the goal and metric provided in its model do not match the actual expectations and preferences of those using the tool. This can be ameliorated by including methods for explainable planning (XAIP), which reveal the reasons for the automated planner's decisions and provide more in-depth interaction with the planner. In this paper we describe, at a high level, two recent pieces of work in XAIP. The first is plan exploration through model restriction, in which contrastive questions are used to build a tree of solutions to a planning problem; through a dialogue with the system, the user comes to better understand the underlying problem and the choices made by the automated planner. The second is strong controllability analysis of probabilistic temporal networks through the solution of a joint chance-constrained optimisation problem; the result of the analysis is a Pareto-optimal front that illustrates the trade-off between cost and risk for a given plan. We also present a short discussion of the limitations of these methods and how they might usefully be combined.
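To make the first idea concrete, the sketch below is a toy illustration (not the authors' implementation) of plan exploration through model restriction: a contrastive question such as "why use this action?" is compiled into a constraint that forbids the action, the restricted model is re-solved, and the new solution is added as a child node, growing a tree of plans. The candidate-plan catalogue and the `replan`, `ask_why_not`, and `Node` names are hypothetical stand-ins for calls to a real planner.

```python
# Toy sketch of plan exploration through model restriction.
# A contrastive question ("why action a rather than something else?") is
# compiled into a constraint forbidding a; replanning under the restricted
# model yields an alternative plan, and repeated questions grow a tree.

from dataclasses import dataclass, field

# Hypothetical catalogue of candidate plans for one toy problem,
# standing in for calls to an external automated planner: (plan, cost).
CANDIDATE_PLANS = [
    (["load", "drive", "unload"], 10),
    (["load", "fly", "unload"], 14),
    (["walk", "carry"], 20),
]

@dataclass
class Node:
    constraints: frozenset          # actions the restricted model forbids
    plan: list
    cost: float
    children: list = field(default_factory=list)

def replan(forbidden):
    """Cheapest candidate plan that uses none of the forbidden actions."""
    feasible = [(p, c) for p, c in CANDIDATE_PLANS if not (set(p) & set(forbidden))]
    return min(feasible, key=lambda pc: pc[1]) if feasible else (None, float("inf"))

def ask_why_not(node, action):
    """Contrastive question 'why do you use <action>?': forbid it and replan."""
    restricted = node.constraints | {action}
    plan, cost = replan(restricted)
    child = Node(frozenset(restricted), plan, cost)
    node.children.append(child)
    return child

root_plan, root_cost = replan(set())
root = Node(frozenset(), root_plan, root_cost)
alt = ask_why_not(root, "drive")           # "why drive rather than something else?"
print("original:   ", root.plan, root.cost)  # ['load', 'drive', 'unload'] 10
print("alternative:", alt.plan, alt.cost)    # ['load', 'fly', 'unload'] 14
```

For the second piece of work, the following sketch illustrates one assumed reading of the cost/risk trade-off: uncertain durations are modelled as independent Gaussians (the means, standard deviations, uniform risk allocation, and makespan-as-cost choice are all assumptions for illustration), a joint risk bound is split across the durations, each fixed allocation is sized with the inverse CDF, and sweeping the bound traces points of a Pareto front.

```python
# Illustrative sketch (assumed model, not the paper's exact formulation):
# strong controllability of a probabilistic temporal network under a joint
# chance constraint.  The scheduler commits to fixed time allocations in
# advance; the joint risk bound delta is split uniformly across the
# uncertain durations, and cost is taken to be the total makespan.

from statistics import NormalDist

# Hypothetical uncertain activity durations: (mean, standard deviation) in minutes.
DURATIONS = [(30.0, 5.0), (45.0, 8.0)]

def min_makespan(delta):
    """Smallest fixed schedule length such that every activity finishes
    within its allocation with joint probability at least 1 - delta."""
    per_activity_risk = delta / len(DURATIONS)   # uniform risk allocation
    return sum(NormalDist(mu, sigma).inv_cdf(1.0 - per_activity_risk)
               for mu, sigma in DURATIONS)

# Sweep the risk bound to trace (risk, cost) points of the Pareto front.
for delta in (0.20, 0.10, 0.05, 0.01):
    print(f"risk <= {delta:.2f}  ->  makespan {min_makespan(delta):6.1f} min")
```

Lowering the risk bound forces larger fixed allocations and hence a longer schedule, which is the trade-off the Pareto front makes visible to the user.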