Keywords: Dataset distillation, Learning-to-rank, Sampling
Abstract: In real-world search settings, learning-to-rank (LtR) models are trained and tuned repeatedly on large amounts of data, consuming significant time and computing resources and raising efficiency and sustainability concerns. One way to address these concerns is to reduce the size of the training datasets. Dataset sampling and dataset distillation are two classes of methods introduced to enable a significant reduction in dataset size while achieving performance comparable to training on the complete data.
In this work, we perform a comparative analysis of dataset distillation and sampling methods in the context of LtR. We evaluate gradient matching and distribution matching dataset distillation approaches, which have been shown to be effective in computer vision, and show how these algorithms can be adapted to the LtR task. Our empirical analysis on three LtR datasets indicates that, in contrast to previous findings in computer vision, the selected distillation methods do not outperform random sampling. Our code and experimental settings are released alongside the paper.
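As a rough illustration of how a gradient matching objective can be adapted to LtR, the sketch below learns a small set of synthetic query-document feature lists by matching gradients of a listwise (ListNet-style) softmax loss computed on real and synthetic batches. The linear scorer, the listwise loss, and all names and sizes (FEAT_DIM, DOCS_PER_QUERY, SYN_QUERIES, distill_step) are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical gradient matching sketch for LtR; a minimal illustration,
# not the paper's implementation. All names and sizes are assumptions.
import torch
import torch.nn.functional as F

FEAT_DIM, DOCS_PER_QUERY, SYN_QUERIES = 136, 10, 50   # assumed dimensions

# Learnable synthetic query-document features with fixed relevance labels.
syn_x = torch.randn(SYN_QUERIES, DOCS_PER_QUERY, FEAT_DIM, requires_grad=True)
syn_y = torch.randint(0, 2, (SYN_QUERIES, DOCS_PER_QUERY)).float()

scorer = torch.nn.Linear(FEAT_DIM, 1)                 # simple linear ranker
opt_syn = torch.optim.Adam([syn_x], lr=0.1)

def listwise_loss(x, y):
    """ListNet-style softmax cross-entropy over the documents of each query."""
    scores = scorer(x).squeeze(-1)                     # (queries, docs)
    return -(F.log_softmax(scores, dim=1) * F.softmax(y, dim=1)).sum(dim=1).mean()

def flat_grad(loss):
    """Gradient of the ranking loss w.r.t. the scorer, flattened to a vector."""
    grads = torch.autograd.grad(loss, list(scorer.parameters()), create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def distill_step(real_x, real_y):
    """One gradient matching update: pull synthetic gradients toward real ones."""
    g_real = flat_grad(listwise_loss(real_x, real_y)).detach()
    g_syn = flat_grad(listwise_loss(syn_x, syn_y))
    match = 1 - F.cosine_similarity(g_real, g_syn, dim=0)
    opt_syn.zero_grad()
    match.backward()
    opt_syn.step()
    return match.item()

# Usage with a random batch standing in for a real LtR training batch.
real_x = torch.randn(64, DOCS_PER_QUERY, FEAT_DIM)
real_y = torch.randint(0, 5, (64, DOCS_PER_QUERY)).float()
print(distill_step(real_x, real_y))
```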
Submission Number: 39