Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering

15 Feb 2018 (modified: 15 Sept 2024) · ICLR 2018 Conference Blind Submission
Abstract: Recently, a popular approach to answering open-domain questions is to first retrieve question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from each passage independently and thus do not fully exploit the multiple retrieved passages, especially for questions that require several pieces of evidence, possibly appearing in different passages, to be answered. These observations raise the problem of aggregating evidence from multiple passages. In this paper, we treat this problem as answer re-ranking. Specifically, based on answer candidates generated by an existing state-of-the-art QA model, we propose two re-ranking methods, a strength-based re-ranker and a coverage-based re-ranker, which use evidence aggregated across passages to better entail the ground-truth answer to the question. Our model achieves state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA, and the open-domain version of TriviaQA, with about 8\% improvement on the former two.
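The strength-based re-ranker mentioned in the abstract aggregates evidence by counting how many passages support each answer candidate. A minimal sketch of that counting idea follows; the function name, input format, and scoring details are illustrative assumptions, not the paper's actual implementation:

```python
from collections import defaultdict


def strength_rerank(candidates):
    """Re-rank answer candidates by aggregated cross-passage evidence.

    `candidates` is a list of (answer_text, reader_score) pairs, one
    entry per passage from which the reader extracted that answer.
    A candidate supported by many passages accumulates more evidence
    than one extracted from a single passage with a high score.
    (Illustrative sketch; not the paper's exact scoring.)
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for answer, score in candidates:
        key = answer.strip().lower()  # normalize surface form
        totals[key] += score
        counts[key] += 1
    # Rank primarily by number of supporting passages, breaking ties
    # with the summed reader score.
    return sorted(totals, key=lambda a: (counts[a], totals[a]), reverse=True)


ranked = strength_rerank([
    ("Paris", 0.9), ("paris", 0.4), ("Lyon", 0.95),
])
# "paris" is extracted from two passages, so under count-based
# aggregation it outranks the single higher-scoring "Lyon".
```

The coverage-based re-ranker in the paper goes further, concatenating the passages that contain a candidate and measuring how well their union covers the question; that requires a learned model rather than simple counting.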
TL;DR: We propose a method that makes use of information from multiple passages for open-domain QA.
Keywords: Question Answering, Deep Learning
Code: [![github](/images/github_icon.svg) shuohangwang/mprc](https://github.com/shuohangwang/mprc)
Data: [QUASAR](https://paperswithcode.com/dataset/quasar-1), [QUASAR-T](https://paperswithcode.com/dataset/quasar-t), [SearchQA](https://paperswithcode.com/dataset/searchqa), [TriviaQA](https://paperswithcode.com/dataset/triviaqa)