Fine-tuning large language models for text ranking with listwise constraints

09 Sept 2025 (modified: 17 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Feature fusion, listwise loss, LLM, text ranking
TL;DR: We propose a method to improve the fine-tuning performance of text ranking models by leveraging feature fusion, incorporating customized MLP modules, and optimizing with a listwise loss.
Abstract: With the rapid adoption of large language models (LLMs) across diverse applications, retrieval augmentation has become a key factor in improving downstream performance. Recent advances show that LLM-based retrieval can substantially enhance ranking quality. In this work, we present a novel LLM-based retrieval framework optimized along three complementary dimensions: (1) a customized attention-based fusion of hidden-layer representations, (2) a dedicated multi-layer perceptron (MLP) module for enriched feature transformation, and (3) a new listwise learning objective, the ListRank loss, which captures fine-grained relevance order. Experimental results demonstrate that our model achieves state-of-the-art performance. The model is publicly available on HuggingFace.
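To make the three components concrete, here is a minimal PyTorch sketch under assumptions not stated in the abstract: LayerAttentionFusion, ScoringMLP, and listwise_softmax_loss are hypothetical names; the fusion is assumed to attend over pooled per-layer hidden states; and since the exact form of the ListRank loss is not given, the loss shown is a generic ListNet-style listwise softmax cross-entropy standing in for it. This is an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayerAttentionFusion(nn.Module):
    """Attention-weighted fusion of pooled hidden states from k layers (assumed design)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # A single learnable query scores each layer's pooled representation.
        self.query = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, layer_states: list) -> torch.Tensor:
        # layer_states: k tensors of shape [batch, hidden]
        stacked = torch.stack(layer_states, dim=1)        # [batch, k, hidden]
        weights = F.softmax(self.query(stacked), dim=1)   # [batch, k, 1]
        return (weights * stacked).sum(dim=1)             # [batch, hidden]


class ScoringMLP(nn.Module):
    """MLP head mapping the fused representation to a scalar relevance score."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)                    # [batch]


def listwise_softmax_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """ListNet-style stand-in for the ListRank loss (exact form not given in the abstract).

    scores, labels: [list_size]; labels are graded relevance judgments.
    Cross-entropy between the label and score top-one distributions.
    """
    return -(F.softmax(labels, dim=-1) * F.log_softmax(scores, dim=-1)).sum()


# Toy usage: score a list of 8 candidates whose LLM encodings span 4 layers.
list_size, k, hidden = 8, 4, 16
layer_states = [torch.randn(list_size, hidden) for _ in range(k)]
fused = LayerAttentionFusion(hidden)(layer_states)
scores = ScoringMLP(hidden)(fused)
loss = listwise_softmax_loss(scores, torch.tensor([3., 2., 0., 1., 0., 0., 2., 0.]))
```

The sketch follows the common setup in which all candidates for one query form the list dimension and are scored jointly; the paper's actual fusion, MLP design, and ListRank objective may differ.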
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 3367