Surprise-Guided Search for Learning Task Specifications From Demonstrations

16 May 2022 (modified: 05 May 2023) · NeurIPS 2022 Submitted
Keywords: Learning from Demonstrations, Specification Mining, Markov Decision Process, Inverse Reinforcement Learning, Formal Methods, Symbolic Learning
Abstract: This paper considers the problem of learning temporal task specifications, e.g., automata and temporal logic, from expert demonstrations. Task specifications are a class of sparse, memory-augmented rewards with explicit support for temporal and Boolean composition. Three features make learning temporal task specifications difficult: (1) the (countably) infinite number of tasks under consideration, (2) a priori ignorance of the memory needed to encode the task, and (3) the discrete solution space, which is typically addressed by brute-force enumeration. To overcome these hurdles, we propose Demonstration Informed Specification Search (DISS): a family of algorithms requiring only black-box access to (i) a maximum entropy planner and (ii) a task sampler from labeled examples. DISS works by alternating between (i) conjecturing labeled examples to make the provided demonstrations less surprising and (ii) sampling tasks consistent with the conjectured labeled examples. We provide a concrete implementation of DISS in the context of tasks described by Deterministic Finite Automata, and show that DISS can efficiently identify tasks from only one or two expert demonstrations.
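The alternating loop described in the abstract can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the names `max_ent_planner`, `task_sampler`, and `conjecture_example` stand in for the black-box components the abstract assumes, and the surprise score is taken to be the negative log-likelihood of the demonstrations under a maximum entropy planner for the candidate task.

```python
# Hypothetical sketch of the DISS loop from the abstract. The callables
# `max_ent_planner`, `task_sampler`, and `conjecture_example` are illustrative
# placeholders for the black-box components, not the paper's actual API.

def diss(demonstrations, max_ent_planner, task_sampler, conjecture_example,
         n_iters=100):
    """Return the candidate task under which the demonstrations are least surprising."""
    labeled_examples = set()                  # conjectured (trace, label) pairs
    best_task, best_surprise = None, float("inf")

    for _ in range(n_iters):
        # (ii) Sample a task (e.g., a DFA) consistent with the labeled examples.
        task = task_sampler(labeled_examples)

        # Surprise: negative log-likelihood of the demonstrations under a
        # maximum entropy planner for this candidate task (lower is better).
        surprise = -sum(max_ent_planner(task).log_prob(demo)
                        for demo in demonstrations)
        if surprise < best_surprise:
            best_task, best_surprise = task, surprise

        # (i) Conjecture a new labeled example intended to make the
        # demonstrations less surprising under subsequently sampled tasks.
        labeled_examples.add(conjecture_example(task, demonstrations, surprise))

    return best_task
```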
Supplementary Material: pdf