Deep Reinforcement Learning with an Action Space Defined by Natural Language

ICLR 2016 workshop submission (modified: 14 Feb 2016)
CMT Id: 87
Abstract: In this paper, we propose the deep reinforcement relevance network (DRRN), a novel deep architecture for handling an action space characterized by natural language, with applications to text-based games. In a particular class of these games, the player must choose among a number of actions described in text, with the goal of maximizing long-term reward. The best action is typically the one that best fits the current situation, which is also described in text and modeled as a state in the DRRN. Because the number of possible sentences grows exponentially with sentence length, the set of unique actions is effectively unbounded; even with a constrained vocabulary, the action space is very large and sparse, posing challenges for learning. To address this challenge, the DRRN extracts separate high-level embedding vectors from the texts describing states and actions, and then applies a general interaction function, such as an inner product, a bilinear operation, or a DNN, to these embedding vectors to approximate the Q-function. We evaluate the DRRN on two popular text games and show superior performance over other deep Q-learning architectures.
Conflicts: uw.edu, microsoft.com
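
As an illustrative sketch (not from the paper), the inner-product variant of the Q-function approximation described in the abstract might look like the following in PyTorch; the class name, layer sizes, feature dimensions, and the choice of bag-of-words inputs are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DRRNSketch(nn.Module):
    """Minimal DRRN-style sketch: separate embedding networks for state
    and action texts, combined by an inner-product interaction to
    approximate Q(state, action)."""

    def __init__(self, state_dim, action_dim, hidden_dim=128, embed_dim=64):
        super().__init__()
        # Each network maps text features (e.g., bag-of-words vectors)
        # to a high-level embedding vector.
        self.state_net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim))
        self.action_net = nn.Sequential(
            nn.Linear(action_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim))

    def forward(self, state_feats, action_feats):
        # state_feats:  (batch, state_dim)
        # action_feats: (batch, n_actions, action_dim), one row per
        #               candidate action text in the current state
        s = self.state_net(state_feats)    # (batch, embed_dim)
        a = self.action_net(action_feats)  # (batch, n_actions, embed_dim)
        # Inner-product interaction: one Q-value per candidate action.
        return torch.einsum('be,bne->bn', s, a)
```

At decision time the agent would select the candidate with the highest Q-value, e.g. `model(s, a).argmax(dim=1)`; the bilinear and DNN interactions mentioned in the abstract would replace the final inner product with a learned interaction.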