Revisiting the Roles of “Text” in Text Games

28 Mar 2022 (modified: 05 May 2023) · LNLS
TL;DR: We find that combining semantic and non-semantic representations is complementary across different RL challenges in text games, while each alone performs worse.
Abstract: Text games present opportunities for natural language understanding (NLU) methods to tackle reinforcement learning (RL) challenges. However, recent work has questioned the necessity of NLU by showing that random text hashes can perform decently. In this paper, we pursue a fine-grained investigation into the roles of text in the face of different RL challenges, and show that semantic and non-semantic language representations can be complementary rather than contrasting. Concretely, we propose a simple scheme to extract relevant contextual information into an approximate state hash as extra input for an RNN-based text agent. This lightweight plug-in achieves performance competitive with state-of-the-art text agents that use advanced NLU techniques such as knowledge graphs and passage retrieval, suggesting that non-NLU methods may suffice to tackle the challenge of partial observability. However, if we remove the RNN encoder and use the approximate or even ground-truth state hash alone, the model performs poorly, which confirms the importance of semantic function approximation for tackling the challenge of combinatorially large observation and action spaces. Our findings and analysis provide new insights for designing better text game task setups and agents.
Track: Non-Archival (will not appear in proceedings)
Acl Rolling Review: https://openreview.net/forum?id=F_9GY8mIRSw