Abstract: This study introduces EventRL, a reinforcement learning approach that significantly enhances the event extraction capabilities of large language models (LLMs). EventRL addresses the challenges of instruction following and hallucination by introducing outcome supervision, which provides direct feedback on the accuracy of extracted events. The method employs specialized reward functions—Argument-F1, Average-F1, and Product-F1—to guide the model's training and improve its understanding of event structures. Our experiments on the ACE05 dataset, which includes both a Held-in Test (for seen event types) and a Held-out Test (for unseen event types), demonstrate that EventRL outperforms Supervised Fine-Tuning (SFT) and Few-Shot Prompting (FSP, based on GPT-4) for event extraction. The results further show that EventRL is particularly effective on unseen event types, and that the choice of reward function and the inclusion of code data can significantly improve event extraction performance.
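To make the three reward variants concrete, the sketch below shows one plausible way to score a predicted event structure against the gold annotation. This is an illustrative assumption, not the authors' implementation: it treats triggers and arguments as sets of tuples, takes Argument-F1 as the argument-level score alone, Average-F1 as the mean of trigger and argument F1, and Product-F1 as their product; the paper's exact matching criteria may differ.

```python
def f1(pred: set, gold: set) -> float:
    """Micro F1 between two sets of extracted items (triggers or arguments)."""
    if not pred and not gold:
        return 1.0
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def outcome_reward(pred_triggers, gold_triggers, pred_args, gold_args,
                   variant: str = "product") -> float:
    """Outcome-supervision reward for one sample (assumed reward definitions).

    pred_triggers / gold_triggers: sets of (event_type, trigger_span) tuples
    pred_args / gold_args: sets of (event_type, role, argument_span) tuples
    """
    trig_f1 = f1(pred_triggers, gold_triggers)
    arg_f1 = f1(pred_args, gold_args)
    if variant == "argument":      # Argument-F1: argument-level score only
        return arg_f1
    if variant == "average":       # Average-F1: mean of trigger and argument F1
        return (trig_f1 + arg_f1) / 2
    return trig_f1 * arg_f1        # Product-F1: high only if both are correct
```

Under this reading, the Product-F1 variant penalizes hallucinated or malformed event structures most sharply, since a low score on either triggers or arguments drives the whole reward toward zero.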
Paper Type: long
Research Area: Information Extraction
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English