Dynamic Integration of Background Knowledge in Neural NLU Systems

15 Feb 2018 (modified: 07 Apr 2024), ICLR 2018 Conference Blind Submission
Abstract: Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrates the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.
TL;DR: In this paper we present a task-agnostic reading architecture for the dynamic integration of explicit background knowledge in neural NLU models.
Keywords: natural language processing, background knowledge, word embeddings, question answering, natural language inference
Data: [MultiNLI](https://paperswithcode.com/dataset/multinli), [SNLI](https://paperswithcode.com/dataset/snli), [SQuAD](https://paperswithcode.com/dataset/squad), [TriviaQA](https://paperswithcode.com/dataset/triviaqa)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1706.02596/code)
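
The abstract describes a task-agnostic reading module that refines per-token word representations by reading background knowledge given as free-text statements. The snippet below is a minimal, illustrative sketch of that idea in PyTorch: it attends from token representations over encoded background statements and gates the retrieved content back into the original embeddings. The class name `BackgroundReadingModule`, the attention and gating scheme, and all dimensions are assumptions made for illustration; they do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BackgroundReadingModule(nn.Module):
    """Illustrative reading module (hypothetical, not the paper's exact model).

    Refines token representations by attending over encoded
    background-knowledge statements and gating the retrieved
    content into the original embeddings.
    """

    def __init__(self, dim):
        super().__init__()
        self.key_proj = nn.Linear(dim, dim)   # projects tokens into the attention space
        self.gate = nn.Linear(2 * dim, dim)   # controls how much knowledge is mixed in

    def forward(self, token_reps, statement_reps):
        # token_reps:     (batch, seq_len, dim)  contextual word representations
        # statement_reps: (batch, n_stmts, dim)  one vector per background statement
        queries = self.key_proj(token_reps)                            # (b, s, d)
        scores = torch.bmm(queries, statement_reps.transpose(1, 2))    # (b, s, n)
        attn = F.softmax(scores, dim=-1)                               # attention over statements
        retrieved = torch.bmm(attn, statement_reps)                    # (b, s, d)
        g = torch.sigmoid(self.gate(torch.cat([token_reps, retrieved], dim=-1)))
        return g * retrieved + (1.0 - g) * token_reps                  # refined representations


# Toy usage: 2 sequences of 5 tokens, 3 background statements, dim 8.
module = BackgroundReadingModule(dim=8)
refined = module(torch.randn(2, 5, 8), torch.randn(2, 3, 8))
print(refined.shape)  # torch.Size([2, 5, 8])
```

Per the abstract, the refined word representations would then be passed on to a task-specific NLU architecture (e.g., for DQA or RTE); the sketch covers only the knowledge-integration step.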