Compositional Instruction Following with Language Models and Reinforcement Learning

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: reinforcement learning, language models, composition, NLP
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Uses compositional value functions and LLMs to solve RL language instruction-following tasks
Abstract: Combining reinforcement learning with language grounding is challenging because the agent must simultaneously explore the environment for many different language commands. We present a method that reduces the sample complexity of RL tasks specified with language by using compositional policy representations. We evaluate our approach in an environment requiring reward function approximation and demonstrate compositional generalization to novel tasks. Our method significantly outperforms the previous best non-compositional baseline in terms of sample complexity across 162 tasks. Our compositional model attains a success rate of 92%, matching the upper-bound performance of an oracle policy, whereas the baseline reaches only 80% with the same number of environment steps.
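
The abstract describes composing policy representations so that value functions learned for primitive instructions can be reused zero-shot for novel composite instructions. The sketch below is a minimal, hypothetical illustration of that idea using the common Boolean task-algebra operators (element-wise max for "or", min for "and" over sub-task Q-values); the paper's actual architecture, environment, and LLM integration are not specified here, and all names (`PrimitiveQ`, `compose_or`, `compose_and`) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of value-function composition for language-specified tasks.
# Assumption: per-instruction Q-functions are composed with max ("or") and
# min ("and"), as in Boolean task-algebra approaches to compositional RL.
import numpy as np

NUM_ACTIONS = 4


class PrimitiveQ:
    """Tabular Q-function for one primitive instruction, e.g. 'pick up the red ball'."""

    def __init__(self, num_states: int, num_actions: int = NUM_ACTIONS):
        self.q = np.zeros((num_states, num_actions))

    def __call__(self, state: int) -> np.ndarray:
        return self.q[state]


def compose_or(q_fns, state):
    """Disjunction: value of achieving any sub-task, via element-wise max."""
    return np.max([q(state) for q in q_fns], axis=0)


def compose_and(q_fns, state):
    """Conjunction (approximate): element-wise min over sub-task values."""
    return np.min([q(state) for q in q_fns], axis=0)


def greedy_action(q_values: np.ndarray) -> int:
    return int(np.argmax(q_values))


if __name__ == "__main__":
    # Two primitive skills learned separately; a policy for the composite
    # instruction "red ball OR blue box" is obtained from their Q-functions
    # without further training.
    q_red = PrimitiveQ(num_states=10)
    q_blue = PrimitiveQ(num_states=10)
    q_red.q[3] = [0.1, 0.9, 0.0, 0.2]
    q_blue.q[3] = [0.8, 0.2, 0.1, 0.0]

    composed = compose_or([q_red, q_blue], state=3)
    print("composed Q-values:", composed, "-> action", greedy_action(composed))
```

In this kind of setup, an LLM would typically be used to parse a free-form instruction into the primitive sub-tasks whose Q-functions are composed, which is one plausible reading of the "LLMs + compositional value functions" pairing in the TL;DR.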
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4155