Contextualized Sparse Representation with Rectified N-Gram Attention for Open-Domain Question Answering

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Withdrawn Submission
Abstract: A sparse representation is known to be an effective means of encoding precise lexical cues in information retrieval tasks by associating each dimension with a unique n-gram-based feature. However, sparse representations have typically relied on term-frequency measures (such as tf-idf and BM25) or on hand-engineered features that are coarse-grained (document-level) and often task-specific, and hence neither easily generalizable nor appropriate for fine-grained (word- or phrase-level) retrieval. In this work, we propose an effective method for learning a highly contextualized, word-level sparse representation by utilizing rectified self-attention weights on the neighboring n-grams. For memory efficiency, we kernelize the inner product space during training so that the large sparse vectors never need to be constructed explicitly. We particularly focus on applying our model to the phrase retrieval problem, which has recently been shown to be a promising direction for open-domain question answering (QA) and which requires lexically sensitive phrase encoding. We demonstrate the effectiveness of the learned sparse representations not only by drastically improving phrase retrieval accuracy (by more than 4%), but also by outperforming all other (pipeline-based) open-domain QA methods with up to 97x faster inference on SQuAD-Open and CuratedTREC.
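The abstract describes two ideas: rectified (ReLU'd) self-attention weights over neighboring n-grams as the sparse coordinates, and a kernel trick that scores a question against a phrase without materializing the vocabulary-sized sparse vectors, since only n-grams occurring in both texts can contribute to the inner product. Below is a minimal, illustrative sketch of that scoring idea, not the authors' implementation; the hidden size, projection matrices, unigram-only matching, and pooling choices are all toy assumptions.

```python
# Toy sketch of kernelized sparse scoring with rectified n-gram attention.
# Assumptions (not from the paper): unigram features only, random projections,
# and the first token used as the pooled question / phrase anchor.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                        # toy hidden size
W_q, W_k = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def rectified_ngram_weights(H):
    """ReLU'd self-attention of each position over the sequence's unigrams;
    row i holds position i's non-negative weight on each visible n-gram."""
    scores = (H @ W_q) @ (H @ W_k).T / np.sqrt(d)
    return np.maximum(scores, 0.0)           # rectification keeps weights non-negative/sparse

# Toy contextual embeddings and token ids for a question and a document span.
q_ids, d_ids = np.array([3, 17, 5]), np.array([17, 5, 9, 3])
H_q = rng.standard_normal((len(q_ids), d))
H_d = rng.standard_normal((len(d_ids), d))

A_q = rectified_ngram_weights(H_q)           # (|q|, |q|) weights over question unigrams
A_d = rectified_ngram_weights(H_d)           # (|d|, |d|) weights over document unigrams

# Kernel trick: only n-grams present in BOTH sequences contribute to the sparse
# inner product, so a binary match matrix replaces the |vocab|-dim vectors.
M = (q_ids[:, None] == d_ids[None, :]).astype(float)   # (|q|, |d|)

# Sparse similarity between the question (anchored at token 0) and the phrase
# (anchored at document position 0), computed without explicit sparse vectors.
sparse_score = A_q[0] @ M @ A_d[0]
print(sparse_score)
```

In a full system this sparse score would be added to the dense phrase-question similarity; the sketch only shows why the kernelized form avoids ever building the vocabulary-sized vectors.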
Keywords: Open-Domain Question Answering, Sparse Representation, Phrase Representation, Information Retrieval
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:1911.02896/code)