Dynamic Integration of Background Knowledge in Neural NLU Systems
Dirk Weissenborn, Tomas Kocisky, Chris Dyer
Feb 15, 2018 (modified: Feb 15, 2018) · ICLR 2018 Conference Blind Submission · Readers: everyone
Abstract: Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrates the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.
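To make the abstract's core idea concrete, here is a minimal sketch of a task-agnostic reading module that refines base word embeddings using encoded free-text background statements, as the abstract describes. This is an illustrative assumption, not the authors' actual architecture: the module name (RefinementModule), the BiLSTM encoder, the attention step, and the gated update are all hypothetical design choices standing in for whatever refinement mechanism the paper specifies.

```python
# Hypothetical sketch: refine task-input word embeddings with background
# knowledge read from free-text statements. All names and design choices
# here are illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class RefinementModule(nn.Module):
    """Produces refined word representations from background assertions."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # A BiLSTM reads each background statement token by token.
        self.reader = nn.LSTM(dim, dim // 2, bidirectional=True,
                              batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, task_tokens, background_tokens):
        base = self.embed(task_tokens)                       # (B, T, dim)
        bg, _ = self.reader(self.embed(background_tokens))   # (B, S, dim)
        # Attend from each task-input word to the background words,
        # so knowledge is exploited selectively per word.
        attn = torch.softmax(base @ bg.transpose(1, 2), dim=-1)
        ctx = attn @ bg                                      # (B, T, dim)
        # Gated update: mix the base embedding with background context.
        g = torch.sigmoid(self.gate(torch.cat([base, ctx], dim=-1)))
        return g * base + (1 - g) * ctx

# Usage: refined representations feed into any task-specific model (DQA, RTE).
module = RefinementModule(vocab_size=10_000, dim=64)
task = torch.randint(0, 10_000, (2, 7))    # task-specific input tokens
facts = torch.randint(0, 10_000, (2, 20))  # free-text background statements
refined = module(task, facts)              # (2, 7, 64)
```

The key property this sketch tries to capture is task-agnosticism: the refinement happens at the word-representation level, so the same module can sit in front of different downstream NLU architectures without modification.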
TL;DR: In this paper we present a task-agnostic reading architecture for the dynamic integration of explicit background knowledge in neural NLU models.
Keywords: natural language processing, background knowledge, word embeddings, question answering, natural language inference