Abstract: We propose a cost-efficient, Bayesian-inspired approach for re-ranking documents with large language models (LLMs) via pair-wise comparisons under strict inference budgets. Our method incorporates BM25 priors and TrueSkill-based uncertainty sampling to select the most informative pairs for LLM comparison. It achieves higher nDCG@10 with fewer comparisons than classical sorting-based and binary-relevance baselines.
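As a rough illustration of the selection loop the abstract describes, here is a minimal Python sketch built on the open-source `trueskill` package. The BM25-to-prior mapping, the `llm_prefers` comparison call, and the use of `trueskill.quality_1vs1` as the uncertainty criterion are assumptions for illustration, not the paper's exact procedure.

```python
import itertools
import trueskill

def bm25_prior(score):
    # Assumption: BM25 scores are z-normalized, then shifted onto the
    # default TrueSkill mean scale. The paper's exact prior may differ.
    return trueskill.Rating(mu=trueskill.MU + score, sigma=trueskill.SIGMA)

def rerank(docs, bm25_scores, llm_prefers, budget):
    """docs: document ids; bm25_scores: aligned, normalized BM25 scores;
    llm_prefers(a, b): hypothetical LLM call, True if a is judged more
    relevant than b; budget: number of pairwise comparisons allowed."""
    ratings = {d: bm25_prior(s) for d, s in zip(docs, bm25_scores)}
    for _ in range(budget):
        # Uncertainty sampling: quality_1vs1 peaks for pairs whose outcome
        # is least predictable given current means and variances.
        a, b = max(itertools.combinations(docs, 2),
                   key=lambda p: trueskill.quality_1vs1(ratings[p[0]],
                                                        ratings[p[1]]))
        winner, loser = (a, b) if llm_prefers(a, b) else (b, a)
        ratings[winner], ratings[loser] = trueskill.rate_1vs1(
            ratings[winner], ratings[loser])
    # Rank by a conservative skill estimate (mu - 3*sigma).
    return sorted(docs, key=lambda d: ratings[d].mu - 3 * ratings[d].sigma,
                  reverse=True)
```

Each comparison shrinks the selected pair's variances and separates their means, so the quality criterion naturally moves on to other uncertain pairs on later iterations.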
Paper Type: Short
Research Area: Information Retrieval and Text Mining
Research Area Keywords: re-ranking, prompting, calibration/uncertainty, NLP in resource-constrained settings
Contribution Types: NLP engineering experiment, Approaches for low compute settings-efficiency, Data analysis
Languages Studied: English
Submission Number: 7863