End-to-end Learning of Logical Rules for Enhancing Document-level Relation Extraction

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: A SOTA approach for jointly learning logical rules and document-level relation extraction.
Abstract: Document-level relation extraction (DocRE) aims to extract relations between entities in a whole document. One of the pivotal challenges of DocRE is to capture the intricate interdependencies between relations of entity pairs. Previous methods have shown that logical rules are able to explicitly help capture such interdependencies. These methods either learn logical rules to refine the output of a trained DocRE model, or first learn logical rules from annotated data and then inject the learnt rules to a DocRE model using auxiliary training objective. In this paper, we argue that these learning pipelines may suffer from the issue of error propagation. To mitigate this issue, we propose \emph{Joint Modeling Relation extraction and Logical rules} or \emph{JMRL} for short, a novel rule-based framework that jointly learns both a DocRE model and logical rules in an end-to-end fashion. Specifically, we parameterize a rule reasoning module in JMRL to simulate the inference of logical rules, thereby explicitly modeling the reasoning process. We also introduce an auxiliary loss and a residual conection mechanism in JMRL to better reconcile the DocRE model and the rule reasoning module. Experimental results on two benchmark datasets demonstrate that the proposed JMRL framework is consistently superior to existing rule-based frameworks on both datasets, improving five baseline models for DocRE by a significant margin.
Paper Type: long
Research Area: Information Extraction
Languages Studied: English

