Enhancing Relation Extraction from Biomedical Texts by Large Language Models

Published: 01 Jan 2024, Last Modified: 31 Oct 2024 · HCI (53) 2024 · CC BY-SA 4.0
Abstract: In this study, we propose a novel relation extraction method enhanced by large language models (LLMs). We incorporate three relation extraction models that leverage LLMs: (1) relation extraction via in-context few-shot learning with LLMs; (2) enhancing sequence-to-sequence (seq2seq)-based fully fine-tuned relation extraction with chain-of-thought (CoT) reasoning explanations generated by LLMs; and (3) enhancing classification-based fully fine-tuned relation extraction with entity descriptions automatically generated by LLMs. In our experiments, we show that in-context few-shot learning with LLMs underperforms on biomedical relation extraction tasks. We further show that entity explanations generated by LLMs can improve the performance of classification-based relation extraction in the biomedical domain. Our proposed model achieved an F-score of 85.61% on the DDIExtraction-2013 dataset, which is competitive with state-of-the-art models.
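To make the first approach concrete, an in-context few-shot prompt for drug-drug interaction (DDI) relation extraction might be assembled as sketched below. This is a minimal illustration under assumptions: the demonstration sentences, entity markers, and exact prompt wording are hypothetical and not the prompts used in the paper; the label names follow the DDIExtraction-2013 convention.

```python
# Sketch of in-context few-shot prompting for DDI relation extraction.
# The demonstrations and prompt template are illustrative placeholders,
# not the actual prompts from the paper.

# DDIExtraction-2013-style relation labels (plus a "none" class).
LABELS = ["mechanism", "effect", "advise", "int", "none"]

# Hypothetical labeled demonstrations; real ones would come from the
# training split of the target dataset.
FEW_SHOT_EXAMPLES = [
    ("Concomitant use of @DRUG-A@ may increase plasma levels of @DRUG-B@.",
     "mechanism"),
    ("@DRUG-A@ should not be combined with @DRUG-B@ in patients with renal impairment.",
     "advise"),
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot prompt asking an LLM to classify the relation
    between the two marked drug entities in `sentence`."""
    lines = [
        "Classify the drug-drug interaction relation between the two marked drugs. "
        f"Answer with one of: {', '.join(LABELS)}.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}")
        lines.append(f"Relation: {label}")
        lines.append("")
    # The test instance ends with an open "Relation:" slot for the model to fill.
    lines.append(f"Sentence: {sentence}")
    lines.append("Relation:")
    return "\n".join(lines)

prompt = build_prompt("@DRUG-A@ may potentiate the sedative effect of @DRUG-B@.")
print(prompt)
```

The resulting string would then be sent to an LLM completion endpoint; the abstract's finding is that this zero-training approach underperforms fully fine-tuned models on biomedical benchmarks.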