Meta-learning from demonstrations improves compositional generalization

Published: 21 Oct 2022, Last Modified: 05 May 2023, LaReL 2022
Keywords: meta-learning, grounded language learning, compositional generalization
TL;DR: We extend meta-seq2seq to grounded environments by having the agent imagine its own meta-learning context, achieving strong performance on Splits H and D of gSCAN.
Abstract: We study the problem of compositional generalization of language-instructed agents in gSCAN. gSCAN is a popular benchmark which requires an agent to generalize to instructions containing novel combinations of words that are not seen in the training data. We propose to improve the agent's generalization capabilities with an architecture inspired by the Meta-Sequence-to-Sequence learning approach (Lake, 2019). The agent receives as context a few example pairs of instructions and action trajectories in a given instance of the environment (a support set) and is tasked with predicting an action sequence for a query instruction in the same environment instance. The context is generated by an oracle, and the instructions come from the same distribution as the training data. In each training episode, we also shuffle the indices of the attributes of the observed environment states and the words of the instructions, so that the agent must infer the relations between attributes and words from the context. Our predictive model uses a standard transformer architecture. We show that the proposed architecture can significantly improve the generalization capabilities of the agent on one of the most difficult gSCAN splits: the "adverb-to-verb" split H.
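
To make the episode construction concrete, below is a minimal sketch (not the authors' code) of how the per-episode index shuffling described in the abstract could be implemented. All names, shapes, and helpers here are illustrative assumptions rather than the paper's actual implementation.

# Sketch of building a meta-learning episode with permuted indices.
# Assumes instructions are lists of word ids and environment states are
# grids of cells, each cell a list of attribute-value ids.
import random

def permute_episode(support_set, query, vocab_size, num_attribute_values):
    """Remap word ids and attribute-value ids consistently across one episode.

    support_set: list of (instruction, state_grid, actions) demonstrations.
    query: a single (instruction, state_grid) pair the agent must solve.
    """
    # One random bijection per episode for words and one for attribute values,
    # so the word-attribute correspondence can only be recovered from context.
    word_perm = list(range(vocab_size))
    random.shuffle(word_perm)
    attr_perm = list(range(num_attribute_values))
    random.shuffle(attr_perm)

    def remap(instruction, state_grid):
        new_instruction = [word_perm[w] for w in instruction]
        new_state = [[attr_perm[a] for a in cell] for cell in state_grid]
        return new_instruction, new_state

    new_support = [(*remap(ins, st), acts) for ins, st, acts in support_set]
    new_query = remap(*query)
    return new_support, new_query

The key point this sketch illustrates is that the same permutation is applied to the support set and the query within an episode, so the agent can only solve the query by inferring the episode-specific word-attribute mapping from the demonstrations.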