Abstract: Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.
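The core idea of "actions as state transformers" can be illustrated with a minimal sketch: each tracked entity holds a state vector, and each action is associated with a learned operator that rewrites the states of the entities it acts on. All names, dimensions, and the linear-plus-tanh operator form below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # entity state dimension (illustrative choice)

# Entity memory: one state vector per tracked entity.
entities = {"tomato": rng.standard_normal(D), "pan": rng.standard_normal(D)}

# Each action operator is sketched here as a learned matrix that
# transforms an entity's state (e.g. "chop", "heat").
operators = {
    "chop": 0.1 * rng.standard_normal((D, D)),
    "heat": 0.1 * rng.standard_normal((D, D)),
}

def apply_action(action, targets):
    """Update the states of the selected entities by applying the
    action's operator: a single simulated process-network step."""
    W = operators[action]
    for name in targets:
        entities[name] = np.tanh(W @ entities[name])

apply_action("chop", ["tomato"])  # "chop the tomato"
apply_action("heat", ["pan"])     # unstated causal effect: the pan gets hot
```

In the actual model, the action and its target entities are selected softly from the text by attention, and the operators are trained end to end; the sketch above hard-codes both selections to keep the state-transformer mechanics visible.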
TL;DR: We propose a new recurrent memory architecture that can track common-sense state changes of entities by simulating the causal effects of actions.
Keywords: representation learning, memory networks, state tracking