PaRaDe: Passage Ranking using Demonstrations with LLMs

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Information Retrieval and Text Mining
Submission Track 2: Language Modeling and Analysis of Language Models
Keywords: in-context learning, demonstration, query likelihood, re-ranking, question generation
TL;DR: We investigate the use of demonstrations for LLM-based re-ranking and question generation.
Abstract: Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first stage retrieval method, such as BM25, are rated and reordered to improve relevance. In this work, we improve LLM-based re-ranking by algorithmically selecting few-shot demonstrations to include in the prompt. Our analysis investigates the conditions where demonstrations are most helpful, and shows that adding even one demonstration is significantly beneficial. We propose a novel demonstration selection strategy based on difficulty rather than the commonly used semantic similarity. Furthermore, we find that demonstrations helpful for ranking are also effective at question generation. We hope our work will spur more principled research into question generation and passage ranking.
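The abstract describes prepending few-shot demonstrations to an LLM re-ranking prompt. A minimal sketch of the overall shape, under loud assumptions: the prompt template, function names, and the toy scoring function below are illustrative only, not the paper's actual templates or selection strategy, and a real system would score passages with an LLM's query log-likelihood rather than the stand-in used here.

```python
def build_prompt(demo_passage: str, demo_query: str, passage: str) -> str:
    # Query-likelihood framing: the model is conditioned on a passage and asked
    # to produce the query. A single demonstration precedes the target passage
    # (the abstract notes that even one demonstration is significantly helpful).
    # This template is a hypothetical illustration, not the paper's prompt.
    return (
        f"Passage: {demo_passage}\nQuery: {demo_query}\n\n"
        f"Passage: {passage}\nQuery:"
    )

def rerank(passages: list[str], score_fn) -> list[str]:
    # Reorder first-stage results (e.g. from BM25) by a relevance score;
    # in the paper's setting score_fn would come from the LLM.
    return sorted(passages, key=score_fn, reverse=True)

if __name__ == "__main__":
    prompt = build_prompt(
        "Paris is the capital of France.",
        "what is the capital of france",
        "The Eiffel Tower is in Paris.",
    )
    print(prompt.count("Passage:"))  # demonstration + target passage
    # Toy stand-in scorer (passage length) just to show the reordering step.
    print(rerank(["short", "a much longer passage"], score_fn=len)[0])
```

The demonstration-selection step itself (difficulty-based rather than similarity-based) is the paper's contribution and is not reproduced here.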
Submission Number: 4402