LLM with Relation Classifier for Document-Level Relation Extraction

ACL ARR 2024 April Submission 783 Authors

16 Apr 2024 (modified: 02 May 2024), CC BY 4.0
Abstract: Large language models (LLMs) have created a new paradigm for natural language processing. Despite their advances, LLM-based methods still lag behind traditional methods in document-level relation extraction (DocRE), a critical task for understanding complex entity relations. To address this issue, this paper first investigates the causes of the performance gap, identifying the dispersion of the LLM's attention across the many entity pairs without relations as a primary factor. We then introduce a novel classifier-LLM approach to DocRE. The proposed approach first applies a classifier specifically designed to select entity-pair candidates that exhibit potential relations, and then feeds these candidates to the LLM for the final relation extraction. This ensures that during inference, the LLM's focus is directed primarily at entity pairs with relations. Experiments on the DocRED and Re-DocRED benchmarks reveal that our method significantly outperforms recent LLM-based DocRE methods.
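
To make the two-stage pipeline concrete, below is a minimal Python sketch (not the authors' code) of the classifier-then-LLM idea described in the abstract: a lightweight candidate classifier first filters entity pairs, and only the retained pairs are passed to an LLM prompt for the final relation label. All names here (EntityPair, classify_candidates, the stub classifier and stub LLM, the 0.5 threshold) are hypothetical placeholders, not the submission's actual implementation.

    """Illustrative sketch of a classifier-then-LLM DocRE pipeline (hypothetical)."""
    from dataclasses import dataclass
    from typing import Callable, Optional


    @dataclass
    class EntityPair:
        head: str      # head entity mention
        tail: str      # tail entity mention
        context: str   # document text (or span) containing both entities


    def classify_candidates(
        pairs: list[EntityPair],
        score_fn: Callable[[EntityPair], float],
        threshold: float = 0.5,
    ) -> list[EntityPair]:
        """Stage 1: keep only entity pairs the classifier deems likely to hold a relation."""
        return [p for p in pairs if score_fn(p) >= threshold]


    def extract_relations(
        candidates: list[EntityPair],
        llm_extract: Callable[[str], Optional[str]],
    ) -> dict[tuple[str, str], str]:
        """Stage 2: prompt the LLM only for the filtered candidates, so its attention
        is not dispersed across the many no-relation pairs in the document."""
        results: dict[tuple[str, str], str] = {}
        for pair in candidates:
            prompt = (
                f"Document: {pair.context}\n"
                f"What is the relation between '{pair.head}' and '{pair.tail}'? "
                f"Answer with a single relation label, or 'none'."
            )
            label = llm_extract(prompt)  # stand-in for any LLM completion call
            if label and label.lower() != "none":
                results[(pair.head, pair.tail)] = label
        return results


    if __name__ == "__main__":
        # Toy demo with stubs standing in for the trained classifier and the LLM.
        doc = "Marie Curie was born in Warsaw and later worked in Paris."
        pairs = [
            EntityPair("Marie Curie", "Warsaw", doc),
            EntityPair("Marie Curie", "Paris", doc),
            EntityPair("Warsaw", "Paris", doc),  # likely a no-relation pair
        ]

        def stub_score(pair: EntityPair) -> float:
            # Toy stand-in for a trained candidate classifier.
            return 0.9 if pair.head == "Marie Curie" else 0.1

        def stub_llm(prompt: str) -> Optional[str]:
            # Toy stand-in for an LLM call; a real system would query an actual model.
            if "'Warsaw'?" in prompt:
                return "place_of_birth"
            if "'Paris'?" in prompt:
                return "residence"
            return "none"

        candidates = classify_candidates(pairs, stub_score)
        print(extract_relations(candidates, stub_llm))

In the paper's setting, the stub components would be replaced by the trained relation-candidate classifier and a real LLM; the point of the sketch is only that the LLM is queried on far fewer no-relation pairs at inference time.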
Paper Type: Short
Research Area: Information Extraction
Research Area Keywords: document-level extraction
Languages Studied: English
Submission Number: 783