Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking

ACL ARR 2024 December Submission270 Authors

12 Dec 2024 (modified: 05 Feb 2025) · License: CC BY 4.0
Abstract: Entity Linking (EL) is a fundamental task for Information Extraction and Knowledge Graphs. In its general form (i.e., end-to-end EL), the goal is to find mentions in a given document and then link those mentions to the corresponding entities in a specific knowledge base. Recently, the retriever-reader paradigm has advanced end-to-end EL by combining the strengths of dense entity retrieval and machine reading comprehension. However, existing studies train the retriever and the reader separately in a pipeline manner, thus ignoring the benefit that interactions between the two components can bring to the task. To make the retriever-reader paradigm more effective for end-to-end EL, we propose $\text{BEER}^2$, a Bidirectional End-to-End training framework for the Retriever and the Reader. Through our designed bidirectional end-to-end training, $\text{BEER}^2$ guides the retriever and the reader to learn from each other, progress together, and ultimately improve EL performance. Extensive experiments on benchmarks from multiple domains demonstrate the effectiveness of our proposed $\text{BEER}^2$.
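The retriever-reader pipeline described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's $\text{BEER}^2$ method: the embeddings, function names, and the exact-match "reader" are all assumptions made up for this example (a real retriever uses learned dense encoders, and a real reader predicts mention spans via machine reading comprehension; $\text{BEER}^2$ additionally trains the two jointly rather than as a pipeline).

```python
import numpy as np

def retrieve(mention_vec, entity_vecs, k=2):
    """Dense retrieval step: score each candidate entity embedding against the
    mention embedding by dot product and return the indices of the top-k."""
    scores = entity_vecs @ mention_vec
    top = np.argsort(-scores)[:k]
    return top, scores[top]

def read(doc_tokens, candidate_names):
    """Reader stub: locate a span in the document matching a retrieved candidate's
    name (a stand-in for the MRC-style span prediction a real reader performs)."""
    for name in candidate_names:
        toks = name.split()
        for i in range(len(doc_tokens) - len(toks) + 1):
            if doc_tokens[i:i + len(toks)] == toks:
                return (i, i + len(toks), name)  # (span start, span end, entity)
    return None

# Toy knowledge base: three entity embeddings in a 2-d space (made-up values).
entity_names = ["Paris", "Berlin", "France"]
entity_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])

# Retrieve candidates for a mention embedding, then "read" the document.
top, _ = retrieve(np.array([0.9, 0.1]), entity_vecs, k=2)
candidates = [entity_names[i] for i in top]
span = read("The Louvre is in Paris .".split(), candidates)
```

In a separately trained pipeline, gradients from the reader never reach the retriever; the paper's contribution is precisely to replace this one-way hand-off with bidirectional end-to-end learning.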
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: knowledge graphs, entity linking/disambiguation, knowledge base construction
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English, Chinese
Submission Number: 270
