Joint Inference of Entities, Relations, and Coreference
Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, Andrew McCallum
Jun 29, 2013 (modified: Jun 29, 2013) · AKBC 2013 submission · readers: everyone
Abstract: Although joint inference is an effective approach to avoid cascading of errors when inferring multiple natural language tasks, its application to information extraction has been limited to modeling only two tasks at a time, leading to modest improvements. In this paper, we focus on the three crucial tasks of automated extraction pipelines: entity tagging, relation extraction, and coreference. We propose a single, joint graphical model that represents the various dependencies between the tasks, allowing uncertainty to flow across task boundaries. Since the resulting model has high tree-width and contains a large number of variables, we present a novel extension to belief propagation that sparsifies the domains of variables during inference. Experimental results show that our joint model consistently improves results on all three tasks as we represent more dependencies. In particular, our joint model obtains a 12% error reduction on tagging over the isolated models.
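The abstract's key inference idea is sparsifying variable domains during belief propagation. Below is a minimal, hypothetical sketch of that idea: prune a variable's candidate values whose marginal belief falls below a fraction of the maximum. The function name, pruning rule, and threshold are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

def sparsify_domain(beliefs, domain, threshold=0.1):
    """Prune domain values whose normalized belief falls below
    `threshold` times the maximum belief (illustrative criterion;
    the paper's exact pruning rule may differ)."""
    b = beliefs / beliefs.sum()          # normalize to a distribution
    keep = b >= threshold * b.max()      # keep only sufficiently likely values
    return [v for v, k in zip(domain, keep) if k], b[keep]

# Toy example: an entity-tag variable with four candidate labels.
domain = ["PER", "ORG", "LOC", "O"]
beliefs = np.array([0.70, 0.25, 0.04, 0.01])
pruned_domain, pruned_beliefs = sparsify_domain(beliefs, domain)
print(pruned_domain)  # low-belief labels are dropped from the domain
```

Shrinking the domain this way reduces the cost of subsequent message computations, which scale with the product of the domain sizes of the variables a factor touches.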