Abstract: Efficiently learning interpretable policies for complex tasks from demonstrations is a challenging problem. We present Hierarchical Inference with Logical Options (HILO), a novel algorithm that imitates expert demonstrations by inferring the rules the expert is following. The rules are represented as linear temporal logic (LTL) formulas, which are interpretable and capable of encoding complex behaviors. Unlike previous work, which learns rules from high-level propositions alone, HILO learns rules by taking both propositions and low-level trajectories as input. It does this by defining a Bayesian model over LTL formulas, propositions, and low-level trajectories. The Bayesian model bridges the gap from formula to low-level trajectory by using a planner to find an optimal policy for a given LTL formula. Stochastic variational inference is then used to find a posterior distribution over formulas and policies given expert demonstrations. We show that by learning rules from both propositions and low-level states, HILO outperforms previous work on a rule-learning task and on four planning tasks while requiring less data. We also validate HILO in the real world by teaching a robotic arm a complex packing task.
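To make the inference step concrete, the following is a minimal sketch of variational inference over a discrete set of candidate LTL formulas, not the paper's implementation. All names and values are hypothetical: the candidate formulas and their demonstration log-likelihoods are placeholders, whereas in HILO the likelihood log p(traj | formula) would come from planning an optimal policy for each formula and scoring the expert's low-level trajectory under it.

```python
import torch

# Hypothetical candidate LTL formulas and placeholder demonstration
# log-likelihoods log p(traj | formula_k); in HILO these scores would be
# produced by the planner-backed likelihood, not hard-coded.
candidate_formulas = ["F a", "F (a & F b)", "G !c & F b"]
log_likelihood = torch.tensor([-42.0, -12.5, -30.1])  # placeholder scores
log_prior = torch.full((3,), -torch.log(torch.tensor(3.0)))  # uniform prior

# Variational posterior q(k) over formulas, parameterized by free logits.
logits = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    opt.zero_grad()
    q = torch.softmax(logits, dim=0)
    # ELBO for a discrete latent: E_q[log p(traj|k) + log p(k) - log q(k)].
    # With few candidates the expectation is computed exactly by summation.
    elbo = (q * (log_likelihood + log_prior - torch.log(q + 1e-12))).sum()
    (-elbo).backward()
    opt.step()

print("posterior over formulas:", torch.softmax(logits, dim=0))
```

Because the latent here is a small categorical variable, the ELBO expectation is computed exactly rather than via stochastic gradient estimates; the full method additionally infers policies and handles a much larger formula space.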