Inductive Generalization in Reinforcement Learning from Specifications

Published: 28 Oct 2023, Last Modified: 04 Dec 2023, GenPlan'23
Abstract: Reinforcement Learning (RL) from logical specifications is a promising approach to learning control policies for complex long-horizon tasks. While these algorithms showcase remarkable scalability and efficiency in learning, a persistent hurdle lies in their limited ability to generalize the policies they generate. In this work, we present an inductive framework to improve policy generalization from logical specifications. We observe that logical specifications can be used to define a class of inductive tasks known as repeated tasks. These are tasks with similar overarching goals but differing inductively in their low-level predicates and distributions. Hence, policies for repeated tasks should also be inductive. To this end, we present a compositional approach that learns policies for unseen repeated tasks by training on only a few repeated tasks. Our approach is evaluated on challenging control benchmarks with continuous state and action spaces, showing promising results in handling long-horizon tasks with improved generalization.
Submission Number: 75