Dependency tree positional encoding method for relation extraction

2021 (modified: 23 Nov 2021) · SAC 2021 · Readers: Everyone
Abstract: Dependency trees have recently been used in relation extraction tasks to capture long-range relations between entities, and many studies have proposed methods for applying dependency tree features to relation extraction models. However, most of these approaches are difficult to transfer to other models, especially when a model's overall structure must be changed or newly designed to accommodate dependency tree information. Existing approaches that build dependency trees into the model architecture, such as graph convolutional networks (GCNs), do not incorporate this information into the input representations. In this paper, we propose a new method that injects dependency tree information into a model's input representations via positional encoding of dependency trees. Specifically, to incorporate dependency tree information into input vectors, we present a novel strategy for converting dependency tree features into positional encodings. An existing model (BERT) augmented with our method achieves state-of-the-art performance, without changes to the original model, on TACRED and SemEval-2010 Task 8, standard benchmark datasets for relation extraction. Through detailed analysis, we show that our method effectively complements relation extraction models and is easy to apply to other models.
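To make the idea concrete, here is a minimal sketch of one plausible way to turn a dependency tree feature into a positional encoding. The abstract does not specify the exact formulation, so this is an assumption for illustration: each token's depth in the dependency tree is fed into the standard sinusoidal positional-encoding formula, producing a vector that could be added to the token's input embedding. The `heads` representation and the helper names (`tree_depths`, `sinusoidal_encoding`) are hypothetical, not from the paper.

```python
import math

def sinusoidal_encoding(position, dim):
    """Standard sinusoidal positional encoding of `position` as a
    length-`dim` vector (sin on even indices, cos on odd indices)."""
    vec = []
    for i in range(dim):
        angle = position / (10000 ** (2 * (i // 2) / dim))
        vec.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return vec

def tree_depths(heads):
    """Depth of each token in the dependency tree, given `heads`,
    where heads[i] is the index of token i's head and -1 marks the root."""
    def depth(i):
        return 0 if heads[i] == -1 else 1 + depth(heads[i])
    return [depth(i) for i in range(len(heads))]

# Toy sentence: "She ate pizza", with "ate" as the root.
heads = [1, -1, 1]           # "She" -> "ate", "ate" = root, "pizza" -> "ate"
depths = tree_depths(heads)  # tree depths: [1, 0, 1]

# One tree-positional vector per token; in a model these would be
# added to (or concatenated with) the input embeddings.
tree_encodings = [sinusoidal_encoding(d, 8) for d in depths]
```

Because the dependency information enters only through the input vectors, the downstream encoder (e.g., BERT) needs no architectural change, which is the portability the abstract emphasizes.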