From POMDP Executions to Probabilistic Axioms.

Daniele Meli, Giulio Mazzi, Alberto Castellini, Alessandro Farinelli

13 Oct 2024 (modified: 26 Oct 2024) · OVERLAY@AI*IA 2022 · CC BY-SA 4.0
Abstract: Partially Observable Markov Decision Processes (POMDPs) model systems with uncertain state using probability distributions over states, called beliefs. In complex domains, however, POMDP solvers must explore large belief spaces, which is often computationally intractable. One solution is to drive exploration with domain knowledge expressed as logic specifications. Defining effective specifications, however, may be challenging even for domain experts. We propose an approach based on inductive logic programming to learn specifications, each with an associated confidence level, from observed POMDP executions. We show that the learning approach converges to robust specifications as the number of examples increases.
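The beliefs mentioned in the abstract are maintained by Bayesian filtering: after taking an action and receiving an observation, the belief over states is predicted through the transition model and corrected by the observation likelihood. A minimal sketch of this standard POMDP belief update, using a hypothetical two-state domain with illustrative transition and observation probabilities (none of which come from the paper):

```python
import numpy as np

# Hypothetical 2-state domain; T and O below are illustrative numbers only.
T = np.array([[0.9, 0.1],    # T[s, s'] = P(s' | s, a) for a fixed action a
              [0.2, 0.8]])
O = np.array([0.7, 0.1])     # O[s'] = P(o | s', a) for the observation received

def belief_update(b, T, O):
    """Posterior belief b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    b_pred = T.T @ b               # predict: marginalize over previous states
    b_post = O * b_pred            # correct: weight by observation likelihood
    return b_post / b_post.sum()   # normalize to a probability distribution

b = np.array([0.5, 0.5])           # uniform prior belief
print(belief_update(b, T, O))      # posterior concentrates on state 0
```

Solvers must reason over the continuous space of such belief vectors, which is why exploration becomes intractable in large domains and why logic specifications that prune or bias the search can help.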