Abstract: Exclusion is an important and universal linguistic skill that humans use to express what they do not want.
However, there has been little research on exclusionary retrieval, where users specify what they do not want included in the results for their queries.
We investigate exclusionary retrieval in the document retrieval setting for the first time.
We present ExcluIR, a set of resources for exclusionary retrieval, consisting of an evaluation benchmark and a training set that helps retrieval models comprehend exclusionary queries.
The evaluation benchmark includes 3,452 high-quality exclusionary queries, each of which has been manually annotated.
The training set contains 70,293 exclusionary queries, each paired with a positive document and a negative document.
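The training triples described above can be illustrated with a minimal sketch; the field names and query text below are assumptions for illustration, as the abstract does not specify the exact data schema.

```python
# Hypothetical shape of one ExcluIR training example: an exclusionary
# query paired with a positive document and a negative document.
# Field names and contents are illustrative assumptions, not the
# dataset's actual schema.
example = {
    "query": "books about space exploration, but not science fiction",
    "positive_doc": "A history of the Apollo program and its missions.",
    "negative_doc": "A science-fiction novel set aboard a starship.",
}

def is_valid_triple(ex: dict) -> bool:
    """Check that an example carries the three fields each
    training query is paired with: query, positive, negative."""
    return all(ex.get(k) for k in ("query", "positive_doc", "negative_doc"))

print(is_valid_triple(example))  # True
```

A contrastive training setup would typically push the query representation toward the positive document and away from the negative one, so that the exclusion clause ("but not ...") changes which document counts as relevant.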
We conduct detailed experiments and analyses, obtaining three main observations:
(i) existing retrieval models with different architectures struggle to comprehend exclusionary queries effectively;
(ii) although integrating our training data improves the performance of retrieval models on exclusionary retrieval, a gap to human performance remains; and
(iii) generative retrieval models have a natural advantage in handling exclusionary queries.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation, passage retrieval
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1423