Natural Language Inference over Interaction Space

15 Feb 2018 (modified: 22 Oct 2023) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: The Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce the Interactive Inference Network (IIN), a novel class of neural network architectures that achieves a high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information sufficient to solve natural language inference, and that a denser interaction tensor contains richer semantic information. One instance of this architecture, the Densely Interactive Inference Network (DIIN), demonstrates state-of-the-art performance on large-scale NLI corpora and NLI-like corpora. Notably, DIIN achieves a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to the strongest published system.
TL;DR: We show that multi-channel attention weights contain semantic features sufficient to solve the natural language inference task.
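
The sketch below illustrates the interaction-space idea from the abstract: embed the premise and hypothesis, form a dense interaction tensor whose channels are the element-wise products of every premise/hypothesis token pair, and extract features from that tensor with 2D convolutions. It is a minimal PyTorch illustration, not the authors' model: the actual DIIN (linked TensorFlow repo below) uses highway encoders, self-attention, and a DenseNet feature extractor, and the layer sizes, class name, and simple conv/pool stack here are illustrative assumptions.

```python
# Minimal sketch of hierarchical feature extraction from interaction space.
# Assumed stand-ins for the real DIIN components are noted in comments.
import torch
import torch.nn as nn


class InteractionSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 2D CNN over the (premise_len x hypothesis_len) interaction "image",
        # with embed_dim input channels (a stand-in for DIIN's DenseNet).
        self.conv = nn.Sequential(
            nn.Conv2d(embed_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, premise_ids, hypothesis_ids):
        p = self.embed(premise_ids)      # (batch, p_len, embed_dim)
        h = self.embed(hypothesis_ids)   # (batch, h_len, embed_dim)
        # Dense interaction tensor: element-wise product of every premise
        # position with every hypothesis position -> (batch, p_len, h_len, d).
        interaction = p.unsqueeze(2) * h.unsqueeze(1)
        features = self.conv(interaction.permute(0, 3, 1, 2))  # channels first
        return self.classifier(features.flatten(1))


# Toy usage: batch of 2 pairs, premise length 7, hypothesis length 5.
model = InteractionSketch(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 3]) -> entailment / neutral / contradiction
```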
Keywords: natural language inference, attention, SoTA, natural language understanding
Code: [YichenGong/Densely-Interactive-Inference-Network](https://github.com/YichenGong/Densely-Interactive-Inference-Network) + [1 community implementation](https://paperswithcode.com/paper/?openreview=r1dHXnH6-)
Data: [GLUE](https://paperswithcode.com/dataset/glue), [MultiNLI](https://paperswithcode.com/dataset/multinli), [Quora Question Pairs](https://paperswithcode.com/dataset/quora-question-pairs), [SNLI](https://paperswithcode.com/dataset/snli)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1709.04348/code)