Keywords: Information Retrieval, Retrieval, Embedding Model, Retrieval Diversity
TL;DR: We investigate the limitations of single-embedding retrievers and propose a multi-embedding retriever architecture.
Abstract: Most text retrievers generate $\textit{one}$ query vector to retrieve relevant documents. Yet, the conditional distribution of relevant documents for a query may be multimodal, e.g., representing different interpretations of the query. We first quantify the limitations of existing retrievers: all retrievers we evaluate struggle more as the distance between the target document embeddings grows. To address this limitation, we develop a new retriever architecture, the $\textbf{A}$utoregressive $\textbf{M}$ulti-$\textbf{E}$mbedding $\textbf{R}$etriever $(\textbf{AMER})$. Our model autoregressively generates multiple query vectors, and all of the predicted query vectors are used to retrieve documents from the corpus. We show that, on synthetic vectorized data, the proposed method captures multiple target distributions perfectly, achieving 4x better performance than a single-embedding model. We also fine-tune our model on real-world multi-answer retrieval datasets and evaluate it in-domain. AMER delivers 6\% and 16\% relative gains over single-embedding baselines on the two datasets we evaluate on. Furthermore, we consistently observe larger gains on the subsets where the embeddings of the target documents are less similar to each other. These results demonstrate the potential of multi-query-vector retrievers and open up a new direction for future work.
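To make the retrieval mechanism concrete, below is a minimal sketch of the multi-embedding retrieval idea described in the abstract: generate several query vectors, where each new vector is conditioned on the previous ones, retrieve with all of them, and merge the results. All names here (`generate_query_vectors`, `retrieve_multi`, the fixed linear conditioning map) are hypothetical illustrations; the paper's actual AMER architecture, training objective, and merging strategy may differ.

```python
# Hypothetical sketch of multi-embedding retrieval; not the paper's implementation.
import numpy as np

def retrieve_multi(query_vectors: np.ndarray, doc_embeddings: np.ndarray,
                   top_k: int = 10) -> list[int]:
    """Retrieve with each predicted query vector, then merge by max score."""
    # Cosine similarity between every query vector and every document.
    q = query_vectors / np.linalg.norm(query_vectors, axis=1, keepdims=True)
    d = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    scores = q @ d.T                 # shape: (num_query_vectors, num_docs)
    merged = scores.max(axis=0)      # a document scores high if ANY query vector matches it
    return np.argsort(-merged)[:top_k].tolist()

def generate_query_vectors(query_embedding: np.ndarray, num_vectors: int,
                           step: np.ndarray) -> np.ndarray:
    """Toy autoregressive generation: each vector is conditioned on the last.
    Here the conditioning is a fixed linear map; AMER would learn this."""
    vectors = [query_embedding]
    for _ in range(num_vectors - 1):
        vectors.append(step @ vectors[-1])
    return np.stack(vectors)

# Toy usage with random document embeddings.
rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 64))
q0 = rng.normal(size=64)
step = rng.normal(size=(64, 64)) / 8.0
qs = generate_query_vectors(q0, num_vectors=4, step=step)
print(retrieve_multi(qs, docs, top_k=5))
```

The max-over-vectors merge is one simple choice: it lets each query vector cover a different mode of the relevant-document distribution, so documents near any one mode can still rank highly.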
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 22214