A Family of Cognitively Realistic Parsing Environments for Deep Reinforcement Learning

Anonymous

09 Mar 2022 (modified: 05 May 2023). Submitted to CMCL 2022.
Keywords: reinforcement learning, incremental chart parsing, self-paced reading, Q-learning, Actor-Critic
TL;DR: We introduce a family of cognitively realistic chart-parsing environments to evaluate potential psycholinguistic implications of RL algorithms.
Abstract: The hierarchical syntactic structure of natural language is a key feature of human cognition that enables us to recursively construct arbitrarily long sentences supporting communication of complex, relational information. In this work, we describe a framework in which learning cognitively realistic left-corner parsers can be formalized as a reinforcement learning (RL) problem, and introduce a family of cognitively realistic chart-parsing environments to evaluate potential psycholinguistic implications of RL algorithms. We report how several baseline Q-learning and Actor-Critic algorithms, both tabular and neural, perform on subsets of the Penn Treebank corpus. We observe a sharp increase in difficulty as parse trees become slightly more complex, indicating that hierarchical reinforcement learning might be required to solve this family of environments.
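The formalization described in the abstract can be illustrated with a minimal sketch: a toy stand-in environment where the agent must emit the correct sequence of left-corner parser actions for one fixed sentence, trained with tabular Q-learning. The action names, gold sequence, and reward scheme below are illustrative assumptions, not the paper's actual environments or code.

```python
import random
from collections import defaultdict

# Hypothetical left-corner parser actions and an assumed gold action
# sequence for a single tiny sentence (illustrative only).
ACTIONS = ["shift", "project", "attach"]
GOLD = ["shift", "project", "shift", "attach"]

def step(state, action):
    """State = number of correct parser actions taken so far."""
    if action == GOLD[state]:
        state += 1
        done = state == len(GOLD)
        # Sparse success reward on completing the full parse (assumed scheme).
        return state, (1.0 if done else 0.0), done
    # Wrong parser action: small penalty and the episode ends.
    return state, -0.1, True

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # tabular Q over (state, action) pairs
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            # Standard Q-learning target: r + gamma * max_a' Q(s', a')
            target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(len(GOLD))]
print(greedy)
```

With a sparse terminal reward, the value signal has to propagate backward from the final `attach` through the whole action sequence; as the abstract notes, this propagation becomes much harder as parse trees (and hence action sequences) grow, which motivates the hierarchical RL suggestion.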