FINE: LLM Prompt Tuning Fused with Internal and External Knowledge for EAE

Published: 2025, Last Modified: 15 Jan 2026 · IJCNN 2025 · CC BY-SA 4.0
Abstract: Large Language Models (LLMs) demonstrate remarkable potential in Event Argument Extraction (EAE) tasks due to their powerful capabilities in contextual understanding and semantic generation. However, the absence of event schema knowledge limits their performance on these tasks. Additionally, we observe a strong correlation between argument roles and entity types, which is often disregarded in prevailing models. To address these challenges, we propose FINE, a prompt tuning approach Fused with InterNal and External knowledge for low-resource EAE, built on a generative framework with LLMs, which integrates global event schema knowledge and local entity information. Specifically, FINE employs an External Knowledge (EK) Prompt Generator to construct external knowledge prompts with high event-awareness, which helps better capture the semantics of roles and their diversity across events. Furthermore, we propose RAEA, a Role-Associated Entity Argument candidate mechanism that filters relevant entities within the context, effectively reducing interference from irrelevant entities. Experimental results on the ACE05-EN and ERE-EN datasets demonstrate that our proposed FINE model achieves significant improvements in EAE, particularly in low-resource scenarios.
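The abstract describes RAEA as filtering in-context entities by their association with argument roles. A minimal sketch of that idea, assuming a hand-written role-to-entity-type mapping and a simple entity representation (both illustrative assumptions, not the paper's actual implementation):

```python
# Hypothetical sketch of role-associated entity candidate filtering:
# keep only entities whose type is plausibly compatible with a given
# argument role, reducing interference from irrelevant entities.
# The mapping and data shapes below are illustrative assumptions.

ROLE_TO_ENTITY_TYPES = {
    "Attacker": {"PER", "ORG", "GPE"},
    "Place": {"GPE", "LOC", "FAC"},
    "Instrument": {"WEA", "VEH"},
}

def candidate_entities(role, entities):
    """Return entities whose type is associated with the given role."""
    allowed = ROLE_TO_ENTITY_TYPES.get(role, set())
    return [e for e in entities if e["type"] in allowed]

entities = [
    {"text": "John", "type": "PER"},
    {"text": "Baghdad", "type": "GPE"},
    {"text": "rifle", "type": "WEA"},
]
print([e["text"] for e in candidate_entities("Instrument", entities)])
# → ['rifle']
```

The filtered candidate set would then be supplied to the prompt in place of the full entity list, so the LLM only chooses among type-compatible arguments.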