Keywords: Language Grounding, Grounded Language Learning, Conversational Grounding, Large Language Models, Text-based Games, Lookahead LLMs, Multi-token LLMs
TL;DR: This paper proposes simple and efficient Lookahead LLMs and demonstrates that they substantially improve the training speed of LLM agents in interactive text-based games for grounded language learning.
Abstract: The cross-modal grounding of LLMs has recently garnered significant attention, while grounding them in textual interactions has been less explored. As the first of its kind, the GLAM framework utilises LLMs as agents in interactive text-based games to investigate their grounding capabilities. However, it suffers from low computational efficiency, which hinders further experiments. This paper proposes the use of Lookahead models for action selection, demonstrating through empirical results that the approach substantially improves training speed, with speed-ups that grow with the size of the action space.
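To make the efficiency argument concrete, here is a minimal sketch of the general idea as described in the abstract: a lookahead (multi-token) head scores every candidate action from a single forward pass over the prompt, rather than one pass per action as in per-action log-probability scoring. This is an illustrative assumption, not the authors' code; the names `LookaheadHead` and `score_actions` are hypothetical, and the horizon `k` and toy dimensions are arbitrary.

```python
# Hypothetical sketch: scoring all candidate actions with ONE forward pass
# of a lookahead head, instead of one LLM pass per action.
import torch
import torch.nn as nn


class LookaheadHead(nn.Module):
    """Predicts the next k tokens from a single prompt hidden state."""

    def __init__(self, hidden_size: int, vocab_size: int, k: int = 4):
        super().__init__()
        self.k = k
        # One linear head per lookahead position (assumed design).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, vocab_size) for _ in range(k)]
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, hidden_size) -> logits: (batch, k, vocab_size)
        return torch.stack([head(hidden) for head in self.heads], dim=1)


def score_actions(hidden: torch.Tensor,
                  lookahead: LookaheadHead,
                  action_token_ids: torch.Tensor) -> torch.Tensor:
    """Log-probability of each candidate action from one forward pass.

    hidden:           (hidden_size,) final hidden state of the prompt.
    action_token_ids: (num_actions, k) tokenised actions, padded to k.
    Returns:          (num_actions,) summed token log-probabilities.
    """
    logits = lookahead(hidden.unsqueeze(0))            # (1, k, vocab)
    log_probs = logits.log_softmax(dim=-1).squeeze(0)  # (k, vocab)
    # Gather each action's token log-prob at every lookahead position.
    token_lp = log_probs.gather(1, action_token_ids.T).T  # (num_actions, k)
    return token_lp.sum(dim=-1)


if __name__ == "__main__":
    hidden_size, vocab_size, k, num_actions = 64, 1000, 4, 6
    head = LookaheadHead(hidden_size, vocab_size, k)
    hidden = torch.randn(hidden_size)                         # prompt state
    actions = torch.randint(0, vocab_size, (num_actions, k))  # toy actions
    scores = score_actions(hidden, head, actions)
    print(scores.argmax().item())  # index of the selected action
```

Under this sketch, the cost of scoring is one prompt encoding regardless of how many actions are compared, which is consistent with the abstract's claim that the speed-up grows with the size of the action space.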
Archival Status: Archival
Paper Length: Short Paper (up to 4 pages of content)
Submission Number: 143