Learning to Solve New Sequential Decision-Making Tasks with In-Context Learning

Published: 07 Nov 2023, Last Modified: 04 Dec 2023, FMDM@NeurIPS2023
Keywords: In-context Learning, Sequential Decision Making
Abstract: Training autonomous agents that can generalize to new tasks from a small number of demonstrations is a long-standing problem in machine learning. Recently, transformers have displayed impressive few-shot learning capabilities across a wide range of domains in language and vision. However, the sequential decision-making setting poses additional challenges and has a much lower tolerance for errors, since the environment's stochasticity or the agent's wrong actions can lead to unseen (and sometimes unrecoverable) states. In this paper, we use an illustrative example to show that a naive approach to using transformers in sequential decision-making problems does not lead to few-shot learning. We then demonstrate how training on sequences of trajectories with certain distributional properties leads to few-shot learning on new sequential decision-making tasks. We investigate different design choices and find that larger model and dataset sizes, as well as greater task diversity, environment stochasticity, and trajectory burstiness, all result in better generalization to out-of-distribution tasks given just a few demonstrations per task. Leveraging these insights, we demonstrate our model's generalization to unseen MiniHack and Procgen tasks via in-context learning from just a handful of expert demonstrations per task.
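The "trajectory burstiness" property mentioned in the abstract can be illustrated with a minimal sketch: with some probability, the demonstration trajectories placed in the transformer's context are drawn from the same task as the query trajectory, so the model can only predict well by attending to its context. This is an assumed, simplified reconstruction for illustration; the function and parameter names are hypothetical and not the paper's actual API.

```python
import random

def make_training_sequence(task_trajectories, task_id, p_bursty, n_context,
                           rng=random):
    """Assemble one training sequence: n_context demonstration trajectories
    followed by a query trajectory from `task_id`.

    With probability `p_bursty` the sequence is "bursty": the context
    demonstrations come from the same task as the query. Otherwise the
    context is drawn from the remaining tasks, so it carries no
    task-specific signal for the query.
    """
    query = rng.choice(task_trajectories[task_id])
    if rng.random() < p_bursty:
        # Bursty sequence: same-task demonstrations in context.
        pool = task_trajectories[task_id]
    else:
        # Non-bursty sequence: demonstrations from other tasks only.
        other = [t for t in task_trajectories if t != task_id]
        pool = [traj for t in other for traj in task_trajectories[t]]
    context = [rng.choice(pool) for _ in range(n_context)]
    return context + [query]
```

At evaluation time, the analogous sequence would place a handful of expert demonstrations of the unseen task in the context, followed by the partial trajectory the agent is currently acting in.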
Submission Number: 23