Robust Natural Language Representation Learning for Natural Language Inference by Projecting Superficial Words Out

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: We use neural networks to project superficial information out of sentence representations for natural language inference, defining and identifying that superficial information from the perspective of first-order logic.
Abstract: In natural language inference, the semantics of some words do not affect the inference. Such information is superficial and causes overfitting. How can we represent and discard it? In this paper, we use first-order logic (FOL), a classic meaning-representation formalism, to explain what information is superficial for a given sentence pair. This explanation also suggests two inductive biases based on its properties. We propose a neural network-based approach that exploits these two inductive biases, and we obtain substantial improvements in extensive experiments.
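The abstract's core idea of "projecting superficial information out" of a representation can be illustrated with a minimal sketch. This is not the paper's model; it only shows the generic linear-algebra operation of removing a vector's component along a hypothetical "superficial" direction `u`, which stands in for whatever direction a learned model would identify:

```python
import numpy as np

def project_out(x: np.ndarray, u: np.ndarray) -> np.ndarray:
    """Return x with its component along direction u removed.

    `u` is a hypothetical placeholder for a direction carrying
    superficial (inference-irrelevant) information.
    """
    u = u / np.linalg.norm(u)          # normalize the direction
    return x - np.dot(x, u) * u        # subtract the projection onto u

rng = np.random.default_rng(0)
x = rng.normal(size=8)                 # a toy word/sentence embedding
u = rng.normal(size=8)                 # a toy "superficial" direction
x_clean = project_out(x, u)
# x_clean is orthogonal to u: the superficial component is gone
print(float(np.dot(x_clean, u / np.linalg.norm(u))))
```

After projection, the representation carries no component along `u`, so any classifier reading `x_clean` cannot rely on that direction.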
Keywords: natural language inference, first order logic