Abstract: Evaluation plays a crucial role in the advancement of information retrieval (IR) models.
However, current benchmarks, which rely on predefined domains and human-labeled data, cannot meet the evaluation needs of emerging domains in a cost-effective and timely manner.
To address this challenge, we propose the Automated Heterogeneous Information Retrieval Benchmark (AIR-Bench).
AIR-Bench is distinguished by three key features: 1) Automated: the testing data in AIR-Bench is automatically generated by large language models (LLMs) without human intervention. 2) Heterogeneous: the testing data is generated for diverse tasks, domains, and languages. 3) Dynamic: the domains and languages covered by AIR-Bench are continually augmented, providing an increasingly comprehensive evaluation benchmark for community developers.
We develop a reliable and robust data generation pipeline that automatically creates diverse, high-quality evaluation datasets from real-world corpora. Our findings demonstrate that the generated testing data aligns well with human-labeled testing data, making AIR-Bench a dependable benchmark for evaluating IR models. The resources in AIR-Bench will be made publicly available.
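To give a concrete sense of what one step of such a pipeline might look like, the sketch below asks an LLM to synthesize a search query for a passage drawn from a real-world corpus. This is a minimal illustration, not the paper's actual pipeline: the prompt wording, the model name, and the `generate_query` helper are all hypothetical, and an OpenAI-compatible chat API is assumed.

```python
# Hypothetical sketch of one step of an LLM-based test-data generation pipeline:
# given a corpus passage, ask an LLM to write a query that the passage answers.
# Prompt text, model name, and helper name are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint and an API key in the environment

def generate_query(passage: str, language: str = "English") -> str:
    """Ask the LLM for a natural search query that this passage should be retrieved for."""
    prompt = (
        f"Write one natural {language} search query that the following passage answers. "
        f"Return only the query.\n\nPassage:\n{passage}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    passage = (
        "The Eiffel Tower was completed in 1889 as the entrance arch "
        "to the 1889 World's Fair in Paris."
    )
    print(generate_query(passage))
```

A full pipeline would additionally need quality-control and filtering stages so that the synthetic queries remain faithful to their source passages; the example above only covers the generation step.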
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: dense retrieval, evaluation, benchmarks
Contribution Types: Data resources
Languages Studied: English, Chinese, German, French, Spanish, Japanese, Korean, Russian, Hindi, Arabic, Bengali, Indonesian, Persian
Submission Number: 280