Abstract: Dense retrieval calls for discriminative embeddings to represent the semantic relationship between query and document. It may benefit from the use of large language models (LLMs), given their strong capability in semantic understanding. However, LLMs are trained by auto-regression, whose working mechanism is completely different from representing the whole text as one discriminative embedding. Thus, it is imperative to study how to adapt LLMs properly so that they can be effectively initialized as the backbone encoder for dense retrieval.
In this paper, we propose a novel approach, called \textbf{LLaRA} (\underline{LL}M \underline{a}dapted for dense \underline{R}etriev\underline{A}l), which performs unsupervised adaptation of an LLM for dense retrieval. LLaRA consists of two pretext tasks: EBAE (Embedding-Based Auto-Encoding) and EBAR (Embedding-Based Auto-Regression), where the LLM is prompted to \textit{reconstruct the input sentence} and \textit{predict the next sentence} based on its text embeddings. LLaRA is simple, lightweight, and highly effective. It is used to adapt LLaMA-2-7B on the Wikipedia corpus. With a moderate number of adaptation steps, it substantially improves the model's fine-tuned performance on a variety of dense retrieval benchmarks. Notably, it achieves new state-of-the-art performance on popular benchmarks, such as passage and document retrieval on MSMARCO and zero-shot retrieval on BEIR. The model and source code will be made publicly available to facilitate future research.
Paper Type: long
Research Area: Information Retrieval and Text Mining
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Preprint Status: There is a non-anonymous preprint (URL specified in the next question).
A1: yes
A1 Elaboration For Yes Or No: section 6
A2: yes
A2 Elaboration For Yes Or No: section 7
A3: yes
A3 Elaboration For Yes Or No: section 1
B: yes
B1: yes
B1 Elaboration For Yes Or No: section 4
B2: no
B2 Elaboration For Yes Or No: Due to text length limits, we do not discuss the licenses or terms of use, but we comply with the licenses of the models and data we use.
B3: n/a
B4: n/a
B5: yes
B5 Elaboration For Yes Or No: section 4
B6: yes
B6 Elaboration For Yes Or No: section 4
C: yes
C1: yes
C1 Elaboration For Yes Or No: section 4
C2: yes
C2 Elaboration For Yes Or No: section 4
C3: yes
C3 Elaboration For Yes Or No: section 4
C4: yes
C4 Elaboration For Yes Or No: section 4
D: no
D1: n/a
D2: n/a
D3: n/a
D4: n/a
D5: n/a
E: no
E1: n/a