Abstract: In this work, we study the effect of annotation guidelines (textual descriptions of event types and arguments) when instruction-tuning large language models for event extraction. We conduct a series of experiments with both human-written and machine-generated guidelines in both full- and low-data settings. Our results demonstrate the promise of annotation guidelines when a sufficient amount of training data is available and highlight their effectiveness in improving cross-schema generalization and performance on low-frequency event types.
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: Event Extraction, Large Language Models, Information Extraction, Instruction Tuning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 5704