How Humans Explain the Difference in the Quality of Plans -- A User Study

Published: 02 Sept 2025 · Last Modified: 11 Sept 2025 · HAXP 2025 Oral · CC BY 4.0
Keywords: Explainable AI, Artificial Intelligence Planning, Human Centred AI, Human in the Loop
Abstract: Recent advances in plan explanation have used abstraction to generate explanations. We consider the task of explaining why there is a difference in the quality of plans produced for a planning problem, $\Pi$, and the same problem with an added constraint, $\Pi + c$. The method abstracts away details of the planning problems until the difference in the quality of the plans they support is minimised. It is not known whether humans use abstractions to explain such differences, and if so, what properties those abstractions have. We present the results of a qualitative user study investigating this question. We tasked participants with explaining differences in plan quality and found that they do indeed use abstractions to explain them. We extract a set of properties that these abstractions satisfy, which can inform automatic abstraction for explanation generation.
Paper Type: New Full Paper
Submission Number: 6