Abstract: Document-level relation extraction aims to identify semantic relations between target entities within a document. Most existing work roughly treats the document as a long sequence and produces target-agnostic representations for relation prediction, limiting the model's ability to focus on the context relevant to the target entities. In this paper, we reformulate the document-level relation extraction task and propose an NA-aware machine Reading Comprehension (NARC) model to tackle this problem. Specifically, the input sequence, formulated as the concatenation of a head entity and the document, is fed into the encoder to obtain comprehensive target-aware representations for each entity. In this way, the relation extraction task is converted into a reading comprehension problem by taking all tail entities as candidate answers. We then add an artificial answer $$\texttt{NO-ANSWER}$$ (NA) for each query and dynamically generate an NA score based on the decomposition and composition of all candidate tail entity features, which weighs the prediction results to alleviate the negative effect of the many no-answer instances introduced by the task reformulation. Experimental results on DocRED, together with extensive analysis, demonstrate the effectiveness of NARC.
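The abstract names the mechanism (query-conditioned encoding of candidate tail entities, plus an NA score that downweights predictions) but not the exact layers. Below is a minimal, hedged sketch of how such NA-weighted scoring might look; the bilinear relation scorer, the mean/max pooling used to compose the NA score, and all dimensions are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class NARCSketch(nn.Module):
    """Illustrative sketch of NA-weighted relation scoring over candidate tails.

    Assumptions (not from the paper): a bilinear head-tail relation scorer and
    a mean/max-pooled summary of candidate tails for the NA score.
    """

    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        # Scores each (head, tail) candidate pair against all relation types.
        self.rel_scorer = nn.Bilinear(hidden_size, hidden_size, num_relations)
        # Maps a pooled summary of all candidate tails to a scalar NA score.
        self.na_scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, head_repr: torch.Tensor, tail_reprs: torch.Tensor):
        # head_repr:  (hidden,)            target-aware head-entity representation
        # tail_reprs: (num_tails, hidden)  candidate tail-entity representations
        num_tails = tail_reprs.size(0)
        head_expanded = head_repr.unsqueeze(0).expand(num_tails, -1)

        # Relation logits for every (head, tail) candidate pair.
        rel_logits = self.rel_scorer(head_expanded, tail_reprs)  # (num_tails, num_relations)

        # NA score composed from a global summary of the candidate tails
        # (mean- and max-pooling here; the paper's decomposition/composition
        # scheme is not specified in the abstract).
        summary = torch.cat([tail_reprs.mean(dim=0), tail_reprs.max(dim=0).values])
        na_score = torch.sigmoid(self.na_scorer(summary))  # scalar in (0, 1)

        # Weigh relation predictions by how likely the query has an answer at all.
        weighted = (1.0 - na_score) * torch.sigmoid(rel_logits)
        return weighted, na_score

# Example usage with random features: one head-entity query, 5 candidate tails,
# 96 relation types (as in DocRED, excluding NA).
model = NARCSketch(hidden_size=768, num_relations=96)
scores, na = model(torch.randn(768), torch.randn(5, 768))
```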