On Event Individuation for Document-Level Information Extraction

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Information Extraction
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: Information Extraction, Template Filling, Events, Reproducibility
TL;DR: We show that disagreements about how to count events cause trouble for the evaluation of document-level information extraction.
Abstract: As information extraction (IE) systems have grown more adept at processing whole documents, the classic task of *template filling* has seen renewed interest as a benchmark for document-level IE. In this position paper, we call into question the suitability of template filling for this purpose. We argue that the task demands definitive answers to thorny questions of *event individuation* — the problem of distinguishing distinct events — about which even human experts disagree. Through an annotation study and error analysis, we show that this raises concerns about the usefulness of template filling metrics, the quality of datasets for the task, and the ability of models to learn it. Finally, we consider possible solutions.
Submission Number: 1033