Reusable Options through Gradient-based Meta Learning

Published: 28 Mar 2023, Last Modified: 28 Mar 2023. Accepted by TMLR.
Abstract: Hierarchical methods in reinforcement learning have the potential to reduce the number of decisions that the agent needs to make when learning new tasks. However, finding reusable, useful temporal abstractions that facilitate fast learning remains a challenging problem. Recently, several deep learning approaches were proposed to learn such temporal abstractions in the form of options in an end-to-end manner. In this work, we point out several shortcomings of these methods and discuss their potential negative consequences. Subsequently, we formulate the desiderata for reusable options and use these to frame the problem of learning options as a gradient-based meta-learning problem. This allows us to formulate an objective that explicitly incentivizes options which enable a higher-level decision maker to adapt to different tasks in few steps. Experimentally, we show that our method learns transferable components that accelerate learning, and that it outperforms prior methods developed for this setting. Additionally, we perform ablations to quantify the impact of using gradient-based meta-learning as well as of the other proposed changes.
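To make the bilevel structure described in the abstract concrete, below is a minimal sketch of a MAML-style gradient-based meta-learning objective in which shared option parameters are meta-learned so that a task-specific higher-level policy can adapt in a few inner gradient steps. All names (options, high_level, task_loss) and the surrogate loss are illustrative assumptions, not the authors' implementation; the actual method would optimize reinforcement learning returns rather than the toy regression objective used here.

```python
import torch

torch.manual_seed(0)

# Shared (meta-learned) option parameters and a task-specific high-level policy.
options = torch.nn.Linear(4, 3)      # maps state features to option embeddings
high_level = torch.nn.Linear(3, 2)   # maps option embeddings to task-level decisions

def task_loss(hl_weight, hl_bias, task_target):
    """Surrogate per-task objective (a stand-in for the RL return)."""
    states = torch.randn(16, 4)
    option_features = options(states)
    decisions = option_features @ hl_weight.t() + hl_bias
    return ((decisions - task_target) ** 2).mean()

meta_opt = torch.optim.Adam(options.parameters(), lr=1e-3)
inner_lr = 0.1
tasks = [torch.randn(2) for _ in range(5)]  # toy "tasks"

for _ in range(100):
    meta_loss = 0.0
    for target in tasks:
        # Inner loop: adapt only the higher-level policy for a few steps,
        # keeping the adaptation differentiable (create_graph=True).
        w, b = high_level.weight, high_level.bias
        for _ in range(3):
            loss = task_loss(w, b, target)
            grads = torch.autograd.grad(loss, (w, b), create_graph=True)
            w = w - inner_lr * grads[0]
            b = b - inner_lr * grads[1]
        # Outer loss: post-adaptation performance, differentiated w.r.t. the options.
        meta_loss = meta_loss + task_loss(w, b, target)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The key design point this sketch illustrates is that only the higher-level policy is adapted in the inner loop, while the option parameters receive gradients solely through the post-adaptation loss, which incentivizes options that make few-step adaptation effective.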
Submission Length: Regular submission (no more than 12 pages of main content)
Video: https://www.youtube.com/watch?v=Dp5a20y9ohw
Code: https://github.com/Kuroo/FAMP
Assigned Action Editor: ~Adam_M_White1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 717