Passage Ranking with Weak Supervision

20 Mar 2019 (modified: 02 Jul 2019) · ICLR 2019 Workshop LLD · Blind Submission
  • Keywords: Passage Ranking, Weak Supervision, BERT Models
  • TL;DR: We propose a weak supervision training pipeline based on the data programming framework for ranking tasks, in which we train a BERT-based ranking model and establish a new state of the art.
  • Abstract: In this paper, we propose a *weak supervision* framework for neural ranking tasks based on the data programming paradigm (Ratner et al., 2016), which enables us to leverage multiple weak supervision signals from different sources. Empirically, we consider two sources of weak supervision signals: unsupervised ranking functions and semantic feature similarities. We train a BERT-based passage-ranking model (which achieves new state-of-the-art performance on two benchmark datasets with full supervision) in our weak supervision framework. Without using any ground-truth training labels, BERT-PR models outperform the BM25 baseline by a large margin on all three datasets and even beat the previous state-of-the-art results with full supervision on two of the datasets.
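To make the data-programming idea concrete, here is a minimal sketch of how multiple weak supervision sources might be combined into training labels for passage ranking. The labeling functions below (`lf_term_overlap`, `lf_length_prior`) and the majority-vote aggregation are illustrative assumptions, not the paper's actual signals: the paper uses unsupervised ranking functions and semantic feature similarities, and the full data programming framework learns per-source accuracies rather than taking a plain vote.

```python
# Hedged sketch of data-programming-style weak supervision for ranking.
# Each labeling function votes on a (query, passage) pair:
#   +1 = relevant, -1 = not relevant, 0 = abstain.
# Votes are then aggregated into a single weak label.

def lf_term_overlap(query, passage, threshold=0.5):
    """Toy stand-in for an unsupervised ranking signal:
    fraction of query terms appearing in the passage."""
    q_terms = set(query.lower().split())
    overlap = len(q_terms & set(passage.lower().split())) / max(len(q_terms), 1)
    if overlap >= threshold:
        return 1
    if overlap == 0.0:
        return -1
    return 0  # abstain on ambiguous overlap

def lf_length_prior(query, passage, min_len=5):
    """Toy prior: very short passages are unlikely to be good answers."""
    return -1 if len(passage.split()) < min_len else 0

def weak_label(query, passage, lfs):
    """Aggregate labeling-function votes by simple majority
    (the real framework would model per-source accuracies)."""
    total = sum(lf(query, passage) for lf in lfs)
    return 1 if total > 0 else (-1 if total < 0 else 0)

lfs = [lf_term_overlap, lf_length_prior]
label = weak_label("neural passage ranking",
                   "We study neural models for passage ranking tasks.", lfs)
print(label)  # all three query terms appear, so the weak label is +1
```

The resulting weak labels would then serve as training targets for the ranking model in place of human annotations.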