In-context Reinforcement Learning with Algorithm Distillation

05 Oct 2022 (modified: 05 May 2023) · FMDM @ NeurIPS 2022
Keywords: Reinforcement Learning, Transformers, Learning to Learn, Large Language Models
TL;DR: We present Algorithm Distillation, a method that outputs an in-context RL algorithm by treating learning to reinforcement learn as a sequential prediction problem.
Abstract: We propose Algorithm Distillation (AD), a method for distilling reinforcement learning (RL) algorithms into neural networks by modeling their training histories with a causal sequence model. Algorithm Distillation treats learning to reinforcement learn as an across-episode sequential prediction problem. A dataset of learning histories is generated by a source RL algorithm, and then a causal transformer is trained by autoregressively predicting actions given their preceding learning histories as context. Unlike sequential policy prediction architectures that distill post-learning or expert sequences, AD is able to improve its policy entirely in-context without updating its network parameters. We demonstrate that AD can reinforcement learn in-context in a variety of environments with sparse rewards, combinatorial task structure, and pixel-based observations, and find that AD learns a more data-efficient RL algorithm than the one that generated the source data.
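Below is a minimal sketch of the training setup the abstract describes: a causal transformer trained autoregressively to predict the source algorithm's actions from a long, across-episode learning history. It assumes discrete observations and actions, PyTorch, and an offline dataset of logged training runs; the class name `ADTransformer` and all tensor names are illustrative, not taken from the paper's implementation.

```python
# Minimal Algorithm Distillation (AD) sketch, assuming discrete observations/actions
# and learning histories already logged from a source RL algorithm (illustrative names).
import torch
import torch.nn as nn


class ADTransformer(nn.Module):
    """Causal transformer over across-episode learning histories.

    Each input token combines the current observation with the previous action
    and previous reward, so predicting the current action never reveals it.
    """

    def __init__(self, n_obs, n_act, d_model=128, n_head=4, n_layer=4, max_len=1024):
        super().__init__()
        self.obs_emb = nn.Embedding(n_obs, d_model)
        self.prev_act_emb = nn.Embedding(n_act + 1, d_model)  # +1 for "no previous action"
        self.prev_rew_emb = nn.Linear(1, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, 4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layer)
        self.act_head = nn.Linear(d_model, n_act)

    def forward(self, obs, prev_act, prev_rew):
        # obs, prev_act: (B, T) int64; prev_rew: (B, T) float32.
        B, T = obs.shape
        pos = torch.arange(T, device=obs.device)
        x = (self.obs_emb(obs)
             + self.prev_act_emb(prev_act)
             + self.prev_rew_emb(prev_rew.unsqueeze(-1))
             + self.pos_emb(pos))
        # Causal mask: token t may only attend to tokens <= t.
        mask = torch.triu(torch.full((T, T), float("-inf"), device=obs.device), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.act_head(h)  # (B, T, n_act) logits for the action at each step


# Usage sketch with random stand-in data; a real dataset would slice long
# across-episode windows out of the source algorithm's logged training runs.
n_obs, n_act, B, T = 10, 5, 8, 256
model = ADTransformer(n_obs, n_act)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

obs = torch.randint(n_obs, (B, T))
prev_act = torch.randint(n_act + 1, (B, T))
prev_rew = torch.rand(B, T)
target_act = torch.randint(n_act, (B, T))  # actions taken by the source algorithm

logits = model(obs, prev_act, prev_rew)
loss = nn.functional.cross_entropy(logits.reshape(-1, n_act), target_act.reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()
```

At evaluation time the same model would be rolled out in a new task, feeding its own observations, actions, and rewards back into the growing context, so any policy improvement comes entirely from in-context computation rather than parameter updates.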