Learning Recurrent Span Representations for Extractive Question Answering

23 Apr 2024 (modified: 22 Oct 2023) · Submitted to ICLR 2017 · Readers: Everyone
Abstract: Reading comprehension, the task of answering questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates defined manually or through an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset, in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over approaches that factor the prediction into separate decisions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and reduces the error of Rajpurkar et al.'s baseline by more than 50%.
TL;DR: We present a globally normalized architecture for extractive question answering that contains explicit representations of all possible answer spans.
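The core idea in the abstract, scoring explicit representations of all candidate spans under a single globally normalized softmax, can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch rendering, not the authors' implementation: it encodes the passage with a BiLSTM, represents each span by concatenating the encoder states at its endpoints, and normalizes scores jointly over all spans. Names such as `SpanScorer` and `max_span_len`, and the endpoint-concatenation choice, are illustrative assumptions; the question-conditioning used by the actual model is omitted.

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Minimal sketch (assumed, not the paper's exact model): score every
    passage span with one globally normalized softmax."""

    def __init__(self, input_dim, hidden_dim, max_span_len=30):
        super().__init__()
        # BiLSTM over pre-embedded passage tokens.
        self.encoder = nn.LSTM(input_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # FFNN scores a span from its two endpoint states (2 * 2*hidden_dim).
        self.ffnn = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        self.max_span_len = max_span_len

    def forward(self, passage):
        # passage: (1, T, input_dim) -- a single evidence document.
        h, _ = self.encoder(passage)          # (1, T, 2*hidden_dim)
        T = h.size(1)
        spans, reps = [], []
        for i in range(T):                    # all spans up to max_span_len
            for j in range(i, min(i + self.max_span_len, T)):
                spans.append((i, j))
                # Fixed-length span representation: concat endpoint states.
                reps.append(torch.cat([h[0, i], h[0, j]]))
        scores = self.ffnn(torch.stack(reps)).squeeze(-1)  # one score per span
        # Global normalization: one softmax over ALL candidate spans,
        # rather than independent start/end or per-word decisions.
        return spans, torch.log_softmax(scores, dim=0)

# Usage: train with the NLL of the gold span's index in `spans`.
model = SpanScorer(input_dim=100, hidden_dim=64)
passage = torch.randn(1, 20, 100)             # 20 tokens, toy embeddings
spans, log_probs = model(passage)
print(spans[log_probs.argmax().item()])       # highest-scoring (start, end)
```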
Keywords: Natural language processing
Conflicts: cs.washington.edu, google.com
Community Implementations: [2 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:1611.01436/code)
