YESNO-PRO: A HIGH-PERFORMANCE POINTWISE RERANKING ALGORITHM BRIDGING ENCODER-DECODER AND DECODER-ONLY LLMS
Keywords: zero-shot text reranking, Large Language Models
TL;DR: YESNO-PRO: A HIGH-PERFORMANCE POINTWISE RERANKING ALGORITHM BRIDGING ENCODER-DECODER AND DECODER-ONLY LLMS
Abstract: Recent research has shown significant progress in zero-shot text reranking with large language models (LLMs). Traditional pointwise approaches prompt the LLM to output relevance labels such as "yes/no" or fine-grained labels, but they have several drawbacks. First, such prompts struggle to capture complex correlations between queries and passages and lack robustness to outputs that fall outside the predefined labels. Second, the ranking score relies solely on the likelihood of the relevance labels, which introduces noise and bias. Finally, existing pointwise approaches are not readily supported by decoder-only LLM services, since ranking requires access to the models' output prediction probabilities. In response to these challenges, we design a novel pointwise approach called yesno-pro, which redefines both the prompt design and the score computation mechanism to better align with the intrinsic nature of text reranking. In addition, we propose a comprehensive reranking framework built on LLM services that supports concurrent ranking calls and quickly adapts to any open-source decoder-only large model. Experimental results demonstrate that this method outperforms existing pointwise and some pairwise/listwise methods on the TREC19/20 and BEIR datasets, achieving state-of-the-art performance. Owing to its concurrency features, this work is applicable to practical applications with high real-time requirements.
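To make the pointwise baseline that the abstract contrasts against concrete, the following is a minimal sketch of a conventional "yes/no" pointwise reranker with concurrent scoring calls. It assumes a hypothetical `llm_logprobs(prompt)` callable that returns next-token log-probabilities; the prompt wording, scoring rule, and function names are illustrative and are not the paper's yesno-pro implementation.

```python
# Sketch of a conventional pointwise yes/no reranker (not yesno-pro itself).
# Assumes a user-supplied llm_logprobs(prompt) -> {token: logprob} callable.
import math
from concurrent.futures import ThreadPoolExecutor

PROMPT_TEMPLATE = (
    "Passage: {passage}\n"
    "Query: {query}\n"
    "Does the passage answer the query? Answer yes or no."
)


def relevance_score(logp_yes: float, logp_no: float) -> float:
    """Softmax-normalized probability of 'yes' over the two label log-probs."""
    m = max(logp_yes, logp_no)
    e_yes = math.exp(logp_yes - m)
    e_no = math.exp(logp_no - m)
    return e_yes / (e_yes + e_no)


def score_passage(llm_logprobs, query: str, passage: str) -> float:
    """Score one query-passage pair from the label token log-probabilities."""
    logprobs = llm_logprobs(PROMPT_TEMPLATE.format(passage=passage, query=query))
    return relevance_score(logprobs.get("yes", -1e9), logprobs.get("no", -1e9))


def rerank(llm_logprobs, query: str, passages: list, max_workers: int = 8):
    """Score all candidates concurrently and sort by descending relevance."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        scores = list(pool.map(lambda p: score_passage(llm_logprobs, query, p), passages))
    return sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
```

As the abstract notes, this style of scoring depends entirely on the likelihood of the predefined labels and on the API exposing token probabilities, which is exactly the limitation yesno-pro's redefined prompt and score computation are designed to address.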
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3400