Beyond Reproducibility: Advancing Zero-shot LLM Reranking Efficiency with Setwise Insertion

Published: 01 Jan 2025 · Last Modified: 07 Oct 2025 · SIGIR 2025 · CC BY-SA 4.0
Abstract: This study presents a comprehensive reproducibility analysis and extension of the Setwise prompting method for zero-shot ranking with Large Language Models (LLMs), as proposed by Zhuang et al. We evaluate the method's effectiveness and efficiency against traditional Pointwise, Pairwise, and Listwise approaches on document ranking tasks. Our reproduction confirms the findings of Zhuang et al., highlighting the trade-off between computational efficiency and ranking effectiveness in Setwise methods. Building on these insights, we introduce Setwise Insertion, a novel approach that leverages the initial document ranking as prior knowledge, reducing unnecessary comparisons and uncertainty by prioritizing candidates that are more likely to improve the ranking. Experimental results across multiple LLM architectures (Flan-T5, Vicuna, and Llama) show that Setwise Insertion yields a 31% reduction in query time, a 23% reduction in model inferences, and a slight improvement in reranking effectiveness compared to the original Setwise method. These findings highlight the practical advantage of incorporating prior ranking knowledge into Setwise prompting for efficient and accurate zero-shot document reranking.
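To make the idea concrete, the following is a minimal sketch of how an insertion-based reranker can exploit a first-stage ranking as a prior. It is a hypothetical illustration, not the paper's implementation: `setwise_pick_best` is a stand-in for one LLM setwise prompt (here mocked with a relevance table), and the climbing strategy is an assumption about how the prior ranking limits comparisons.

```python
# Illustrative sketch of Setwise-style insertion reranking (hypothetical;
# the paper's exact algorithm is not given in the abstract above).

def setwise_pick_best(query, docs, relevance):
    """Stand-in for one LLM setwise call: index of the best doc in `docs`.

    A real system would prompt an LLM with the query and the small set of
    documents and ask which one is most relevant; here a relevance table
    mocks that judgment so the sketch is runnable.
    """
    return max(range(len(docs)), key=lambda i: relevance[docs[i]])

def setwise_insertion_rerank(query, initial_ranking, relevance, set_size=3):
    """Insert each document from the first-stage ranking into a growing list.

    The prior ranking is exploited in two ways: documents are processed in
    prior order, so the list under construction is already close to sorted,
    and each candidate climbs set_size - 1 positions per setwise call,
    stopping as soon as it fails to win a set. Well-placed documents
    therefore trigger very few LLM calls.
    """
    ranked = []
    for doc in initial_ranking:
        hi = len(ranked)  # prior: assume doc belongs no higher than here
        while hi > 0:
            lo = max(0, hi - (set_size - 1))
            window = ranked[lo:hi]
            best = setwise_pick_best(query, window + [doc], relevance)
            if best == len(window):
                hi = lo  # candidate beats the whole window; keep climbing
            else:
                # The window's top document wins, so the candidate belongs
                # inside this window; localize its slot with pairwise calls.
                hi = lo + 1
                while hi < lo + len(window) and \
                        setwise_pick_best(query, [doc, ranked[hi]], relevance) != 0:
                    hi += 1
                break
        ranked.insert(hi, doc)
    return ranked
```

With `set_size=2` this degenerates to pairwise insertion; larger sets amortize more positions per LLM call, which is where the reported reduction in model inferences would come from under these assumptions.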