Abstract: Large language models (LLMs) have driven recent advances in listwise reranking, achieving impressive state-of-the-art results. However, their massive parameter counts and limited context sizes hinder efficient reranking. To address this, we present LiT5, a family of efficient listwise rerankers based on the T5 model. Our approach demonstrates competitive reranking effectiveness compared to listwise LLM rerankers, with far fewer parameters, greater computational efficiency, and the ability to rerank more passages in a single pass. Our models consistently deliver strong effectiveness with as few as 220M parameters, offering a scalable solution for listwise reranking. Code and scripts for reproducibility are available at https://github.com/castorini/rank_llm.
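To make the listwise setup concrete, the sketch below shows generative listwise reranking with a T5-style sequence-to-sequence model: candidate passages are numbered in a single prompt, the model emits a permutation of their identifiers, and the permutation is parsed back into an ordering. The model checkpoint, prompt template, and output format here are illustrative assumptions, not the exact LiT5 configuration; the reference implementation lives in the repository linked above.

```python
# A minimal sketch of generative listwise reranking with a T5-style
# seq2seq model, in the spirit of LiT5. The checkpoint, prompt template,
# and output format are assumptions for illustration only.
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # assumption: any seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def listwise_rerank(query: str, passages: list[str]) -> list[str]:
    # Number each candidate so the model can refer to passages by index.
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Rank the following passages by relevance to the query. "
        "Answer with identifiers only, e.g. [2] > [1] > [3].\n"
        f"Query: {query}\nPassages:\n{numbered}"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    # Parse the generated permutation; fall back to the original order
    # for any passage the model failed to mention.
    seen, order = set(), []
    for tok in re.findall(r"\[(\d+)\]", decoded):
        idx = int(tok) - 1
        if 0 <= idx < len(passages) and idx not in seen:
            seen.add(idx)
            order.append(idx)
    order += [i for i in range(len(passages)) if i not in seen]
    return [passages[i] for i in order]
```

Because the whole candidate list sits in one prompt and the output is a single short permutation string, a single forward pass scores the entire list; this is what allows a compact model to rerank many passages at once rather than scoring query-passage pairs one by one.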