Keywords: model reconciliation, probabilistic logic programming, most probable explanations (MPE)
Abstract: In human-AI interaction, effective communication relies on aligning the AI agent’s model with the human user’s mental model, a process known as model reconciliation. However, existing model reconciliation approaches predominantly assume deterministic models, overlooking the fact that human knowledge is often uncertain or probabilistic.
To bridge this gap, we present a probabilistic model reconciliation framework that resolves inconsistencies in the probabilities of most probable explanation (MPE) outcomes between an agent’s model and a user’s model.
Our approach is built on probabilistic logic programming (PLP) using ProbLog, where explanations are generated as cost-optimal model updates that reconcile these probabilistic differences (a toy illustration follows the abstract).
We develop two search algorithms: a generic baseline and an optimized version.
The latter is guided by theoretical insights and further extended with greedy and weighted variants to improve scalability and efficiency (a simplified search sketch also follows the abstract).
We validate our approach through a user study on explanation types and through computational experiments showing that the optimized version consistently outperforms the generic baseline.
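
To make the MPE discrepancy concrete, here is a minimal, hypothetical Python sketch, not the paper's code: each model assigns probabilities to independent probabilistic facts, and the MPE is the complete truth assignment (possible world) with maximal probability. The fact names, probabilities, and the helpers world_prob and mpe are all invented for illustration.

    # Hypothetical toy illustration (not the paper's code). Each model assigns
    # probabilities to independent probabilistic facts; the MPE is the complete
    # truth assignment (possible world) with maximal probability.
    from itertools import product

    def world_prob(model, world):
        """Probability of a complete truth assignment under independent facts."""
        p = 1.0
        for fact, prob in model.items():
            p *= prob if world[fact] else 1.0 - prob
        return p

    def mpe(model):
        """Most probable explanation: the highest-probability complete world."""
        facts = sorted(model)
        worlds = (dict(zip(facts, bits))
                  for bits in product([True, False], repeat=len(facts)))
        return max(worlds, key=lambda w: world_prob(model, w))

    # Invented fact names and probabilities, for illustration only.
    agent = {"burglary": 0.70, "earthquake": 0.2}   # agent's model
    user  = {"burglary": 0.45, "earthquake": 0.2}   # user's mental model

    print(mpe(agent))  # {'burglary': True,  'earthquake': False}
    print(mpe(user))   # {'burglary': False, 'earthquake': False}

Because the two models disagree on the probability of burglary, their MPE worlds differ, which is exactly the kind of discrepancy the framework is meant to reconcile.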
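The cost-optimal update search can likewise be sketched. The abstract does not give the algorithms' details, so the following uniform-cost search over discrete probability edits is only an assumed stand-in for the generic baseline; STEP, the edit cost (sum of absolute changes), and the reconcile helper are all assumptions. It reuses mpe, agent, and user from the sketch above.

    # Hypothetical reconciliation search (not the paper's algorithm): a
    # uniform-cost search over discrete probability edits, reusing mpe,
    # agent, and user from the sketch above. Cost = sum of absolute changes.
    import heapq

    STEP = 0.1  # assumed granularity of a single probability edit

    def reconcile(user_model, target_mpe):
        """Return (cost, model): a cheapest edit of the user's fact
        probabilities whose MPE matches the agent's MPE."""
        start = tuple(sorted(user_model.items()))
        frontier = [(0.0, start)]
        seen = {start}
        while frontier:
            cost, state = heapq.heappop(frontier)
            model = dict(state)
            if mpe(model) == target_mpe:
                return cost, model               # cheapest reconciling update
            for fact, p in model.items():
                for q in (p - STEP, p + STEP):   # candidate single-fact edits
                    if 0.0 < q < 1.0:
                        nxt = dict(model)
                        nxt[fact] = round(q, 2)
                        key = tuple(sorted(nxt.items()))
                        if key not in seen:
                            seen.add(key)
                            heapq.heappush(frontier, (cost + STEP, key))
        return None

    cost, updated = reconcile(user, mpe(agent))
    print(cost, updated)  # expected: 0.1 {'burglary': 0.55, 'earthquake': 0.2}

The paper's optimized, greedy, and weighted variants would presumably prune or guide this frontier rather than expand it exhaustively; the uniform-cost version above only illustrates the generic search problem.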
Supplementary Material: zip
Primary Area: Other (please use sparingly, only use the keyword field for more details)
Submission Number: 22200