Disentangling Locality and Entropy in Ranking Distillation

ICLR 2026 Conference Submission 17484 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · Everyone · CC BY 4.0
Keywords: ranking, neural ranking, distillation
TL;DR: Training processes of ranking models often include multiple stages of negative mining as well as teacher distillation; our findings show that the combination of these expensive processes is unnecessary under distillation.
Abstract: The training process of ranking models involves two key data selection decisions: a sampling strategy (which selects the data to train on), and a labeling strategy (which provides the supervision signal over the sampled data). Modern ranking systems, especially those for performing semantic search, typically use a "hard negative" sampling strategy to identify challenging items using heuristics, and a distillation labeling strategy to transfer ranking "knowledge" from a more capable model. In practice, these approaches have grown increasingly expensive and complex; for instance, popular pretrained rankers from SentenceTransformers involve an ensemble of 12 models, with data provenance hampering reproducibility. Despite their complexity, modern sampling and labeling strategies have not been fully ablated, leaving the underlying source of effectiveness gains unclear. Thus, to better understand why models improve and potentially reduce the expense of training effective models, we conduct a broad ablation of sampling and distillation processes in neural ranking. We frame and theoretically derive the orthogonal nature of model geometry affected by example selection and the effect of teacher ranking entropy on ranking model optimization, establishing conditions in which data augmentation can effectively mitigate bias in a ranking model. Empirically, our investigation on established benchmarks and common architectures shows that sampling processes that were once highly effective in contrastive objectives may be spurious or harmful under distillation. We further investigate how data augmentation, in terms of both inputs and targets, can affect effectiveness and the intrinsic behavior of models in ranking. Through this work, we aim to encourage more computationally efficient approaches that reduce focus on contrastive pairs and instead directly understand training dynamics under rankings, which better represent real-world settings.
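To make the sampling-versus-labeling distinction in the abstract concrete, below is a minimal illustrative sketch (not the authors' code) contrasting the two supervision signals: a contrastive objective in which a positive must outscore sampled hard negatives, and a distillation objective in which the student matches a teacher's score distribution over the same candidate list. Function names, tensor shapes, and the temperature parameter are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb, pos_emb, neg_embs):
    """InfoNCE-style loss: the positive must outscore the sampled (hard) negatives."""
    # query_emb: (d,), pos_emb: (d,), neg_embs: (k, d)
    scores = torch.cat([
        (query_emb * pos_emb).sum().unsqueeze(0),  # positive score
        neg_embs @ query_emb,                      # negative scores
    ])
    target = torch.tensor(0)                       # the positive sits at index 0
    return F.cross_entropy(scores.unsqueeze(0), target.unsqueeze(0))

def distillation_loss(student_scores, teacher_scores, temperature=1.0):
    """KL divergence between teacher and student ranking distributions over one list."""
    # student_scores, teacher_scores: (n,) scores over the same candidates
    s = F.log_softmax(student_scores / temperature, dim=-1)
    t = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(s, t, reduction="sum")

# Toy usage with random tensors
d, k = 8, 4
q, p, negs = torch.randn(d), torch.randn(d), torch.randn(k, d)
print(contrastive_loss(q, p, negs))
print(distillation_loss(torch.randn(k + 1), torch.randn(k + 1)))
```

In this framing, the sampling strategy decides which candidates populate the list (e.g., mined hard negatives), while the labeling strategy decides whether supervision is a one-hot positive (contrastive) or the teacher's full score distribution (distillation), whose entropy the abstract identifies as a key factor.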
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 17484