Are Language Models Robust Coreference Resolvers?

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
TL;DR: Prompting LMs for coreference outperforms unsupervised coreference systems and generalizes well across domains, languages, and time periods with no additional training data, but it trails behind continued fine-tuning of neural models.
Abstract: Recent work on extending coreference resolution across domains and languages relies on annotated data in both the target domain and language. At the same time, pre-trained large language models (LMs) have been reported to exhibit strong zero- and few-shot learning abilities across a wide range of NLP tasks. However, prior work has mostly studied this ability using artificial sentence-level datasets such as the Winograd Schema Challenge. In this paper, we assess the feasibility of prompt-based coreference resolution by evaluating instruction-tuned language models on difficult, linguistically complex coreference benchmarks (e.g., CoNLL-2012). We show that prompting for coreference can outperform current unsupervised coreference systems, although this approach appears to rely on high-quality mention detectors. Further investigations reveal that instruction-tuned LMs generalize surprisingly well across domains, languages, and time periods; yet continued fine-tuning of neural models should still be preferred when even a small number of annotated examples is available.
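For illustration only, the snippet below sketches what prompt-based mention linking might look like when gold or detector-provided mentions are available: the marked mention is paired with earlier candidate mentions and the LM is asked to pick an antecedent. The prompt wording, the `query_lm` callable, and the multiple-choice format are assumptions made for this sketch, not the prompts or models evaluated in the paper.

```python
from typing import Callable, List, Tuple

def build_linking_prompt(text: str, mention: Tuple[int, int],
                         candidates: List[Tuple[int, int]]) -> str:
    """Ask the LM which earlier mention (if any) the marked mention corefers with.

    Spans are (start, end) character offsets into `text`. The prompt format is
    illustrative only.
    """
    marked = text[:mention[0]] + "*" + text[mention[0]:mention[1]] + "*" + text[mention[1]:]
    options = "\n".join(
        f"({chr(ord('A') + i)}) {text[s:e]}" for i, (s, e) in enumerate(candidates)
    )
    none_option = f"({chr(ord('A') + len(candidates))}) None of the above"
    return (
        "In the passage below, which earlier mention does the phrase marked with "
        "asterisks refer to? Answer with a single letter.\n\n"
        f"Passage: {marked}\n\nOptions:\n{options}\n{none_option}\nAnswer:"
    )

def link_mention(text: str, mention: Tuple[int, int],
                 candidates: List[Tuple[int, int]],
                 query_lm: Callable[[str], str]) -> int:
    """Return the index of the chosen antecedent, or -1 for 'no antecedent'.

    `query_lm` is a stand-in for any function mapping a prompt string to the
    text reply of an instruction-tuned LM.
    """
    reply = query_lm(build_linking_prompt(text, mention, candidates)).strip()
    letter = reply[:1].upper()
    idx = ord(letter) - ord("A") if letter.isalpha() else -1
    return idx if 0 <= idx < len(candidates) else -1
```

Under this mention-pair framing, document-level clusters would be formed by transitively merging the predicted links, which is one common (but not the only) way to turn pairwise decisions into coreference chains.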
Paper Type: long
Research Area: Discourse and Pragmatics
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models
Languages Studied: English (main), Chinese, Arabic, Dutch, Italian, Spanish