How Good is a Recommender in Machine-Assisted Cross Document Event Coreference Resolution Annotation?

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Annotating cross-document event coreference links is a tedious task that requires annotators to have near-oracle knowledge of a document collection. The heavy cognitive load of this task decreases overall annotation quality while inevitably increasing latency. To support annotation efforts, machine-assisted recommenders can sample likely coreferent events for a given target event, sparing annotators the burden of examining large numbers of true-negative pairs. However, there has been little work evaluating the effectiveness of recommender approaches, particularly for event coreference. To this end, we first create a simulated version of recommender-based annotation for cross-document event coreference resolution. We then adapt an existing method as the model governing recommendations. Finally, we introduce a novel method to assess the simulated recommender by evaluating an annotator-centric recall vs. annotation-effort tradeoff.
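The abstract does not detail the simulation mechanics, so the following is only a minimal sketch, assuming an oracle annotator who confirms every truly coreferent pair the recommender surfaces. The helper name simulate_recommender_annotation, the pluggable score_fn (standing in for the adapted recommendation model), and the random scorer in the usage example are all hypothetical; effort is measured as the number of pairs the annotator examines.

```python
import random

def simulate_recommender_annotation(events, gold_clusters, score_fn, k=5):
    """Hypothetical sketch of recommender-assisted CDCR annotation.

    For each target event, the recommender surfaces the top-k candidates
    ranked by score_fn; an oracle annotator links every surfaced candidate
    that is truly coreferent. Returns recall over gold coreference links
    and annotation effort as the number of pairwise judgments made.
    """
    gold_links = {
        frozenset((a, b))
        for cluster in gold_clusters
        for i, a in enumerate(cluster)
        for b in cluster[i + 1:]
    }
    found, effort = set(), 0
    for target in events:
        ranked = sorted(
            (e for e in events if e != target),
            key=lambda c: score_fn(target, c),
            reverse=True,
        )
        for candidate in ranked[:k]:
            effort += 1  # one pairwise judgment by the annotator
            pair = frozenset((target, candidate))
            if pair in gold_links:
                found.add(pair)
    recall = len(found) / len(gold_links) if gold_links else 1.0
    return recall, effort

# Toy data: five event mentions, two gold clusters; a random scorer
# stands in for a real recommendation model.
events = ["e1", "e2", "e3", "e4", "e5"]
gold = [["e1", "e2", "e3"], ["e4", "e5"]]
for k in (1, 2, 4):
    r, n = simulate_recommender_annotation(
        events, gold, score_fn=lambda a, b: random.random(), k=k
    )
    print(f"k={k}: recall={r:.2f}, pairs examined={n}")
```

Sweeping k (or, equivalently, a score threshold) traces one plausible version of the recall vs. annotation-effort curve the paper describes: larger k recovers more gold links at proportionally higher annotator effort.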