Joint Entity and Relation Extraction Based on Prompt Learning and Multi-channel Heterogeneous Graph Enhancement

Published: 01 Jan 2024 · Last Modified: 14 Aug 2025 · ISPA 2024 · CC BY-SA 4.0
Abstract: Joint extraction of entities and relations is a crucial task in information extraction, aiming to extract all relation triples from unstructured text. However, current joint extraction methods face two main issues. First, they rarely consider the semantic information carried by entity and relation type labels, so models fail to fully understand and exploit the rich semantics in these labels, which limits their performance. Second, although table-filling methods are widely used, they focus only on start or end positions and ignore deep interactions between tables, relying solely on word-level information. To address these issues, we propose P-MHE, a framework based on prompt learning and multi-channel heterogeneous graph enhancement. First, we use prompt templates to construct semantic nodes for entity and relation type labels and initialize them, together with words, as nodes in a heterogeneous graph; these semantic nodes are iteratively fused through a message-passing mechanism to obtain node representations suited to the entity and relation extraction tasks. Second, we design a multi-channel heterogeneous graph that models node relationships from different perspectives, enhancing feature interactions among different types of nodes. Finally, we aggregate the iterated semantic node information of the entity and relation type labels and construct a separate decoding table for each entity and relation type, better adapting to their respective characteristics. We evaluate our model on four public datasets. Experimental results show that P-MHE outperforms existing models on multiple datasets, and extensive additional experiments further validate its effectiveness.
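The abstract does not include code, so the following minimal PyTorch sketch is only an illustration of the described pipeline under our own assumptions, not the authors' implementation: prompt-encoded label semantic nodes are joined with word nodes in one graph, fused by a few rounds of multi-channel message passing, and decoded with one table per relation type. All module names, shapes, and the channel adjacency construction are hypothetical placeholders.

```python
# Illustrative sketch of the P-MHE idea from the abstract (assumptions, not the authors' code).
import torch
import torch.nn as nn


class MultiChannelMessagePassing(nn.Module):
    """One round of message passing over C heterogeneous channels."""

    def __init__(self, dim: int, num_channels: int):
        super().__init__()
        self.channel_proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_channels)])
        self.update = nn.GRUCell(dim, dim)

    def forward(self, nodes: torch.Tensor, adjs: torch.Tensor) -> torch.Tensor:
        # nodes: (N, dim); adjs: (C, N, N) row-normalized adjacency, one matrix per channel.
        messages = torch.zeros_like(nodes)
        for c, proj in enumerate(self.channel_proj):
            messages = messages + adjs[c] @ proj(nodes)  # aggregate neighbors per channel
        return self.update(messages, nodes)              # fuse messages into node states


class PerTypeTableDecoder(nn.Module):
    """Bilinear table filling: one score table per relation type."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.head = nn.Linear(dim, dim)
        self.tail = nn.Linear(dim, dim)
        self.rel_bilinear = nn.Parameter(torch.randn(num_relations, dim, dim) * 0.02)

    def forward(self, words: torch.Tensor) -> torch.Tensor:
        h, t = self.head(words), self.tail(words)  # (L, dim) each
        # scores[r, i, j]: score that (word_i, relation_r, word_j) forms a triple
        return torch.einsum("id,rde,je->rij", h, self.rel_bilinear, t)


if __name__ == "__main__":
    dim, num_channels, num_relations, sent_len, num_labels = 64, 3, 5, 12, 5
    word_nodes = torch.randn(sent_len, dim)    # would come from a pretrained encoder
    label_nodes = torch.randn(num_labels, dim) # would come from prompt-encoded label text
    nodes = torch.cat([word_nodes, label_nodes], dim=0)
    n = nodes.size(0)
    adjs = torch.softmax(torch.randn(num_channels, n, n), dim=-1)  # placeholder channel graphs

    mp = MultiChannelMessagePassing(dim, num_channels)
    for _ in range(2):  # iterative fusion of word and label semantic nodes
        nodes = mp(nodes, adjs)

    decoder = PerTypeTableDecoder(dim, num_relations)
    tables = decoder(nodes[:sent_len])  # (num_relations, sent_len, sent_len)
    print(tables.shape)
```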