Abstract: Events are specific occurrences, incidents, or happenings that take place under a particular background. Event reasoning aims to reason over events according to the relations among them, and cutting-edge event reasoning techniques provide crucial and fundamental abilities underlying various natural language processing applications. Large language models (LLMs) have made significant advancements in event reasoning owing to their wealth of knowledge and reasoning capabilities. However, current open-source LLMs do not consistently demonstrate strong proficiency in event reasoning. This shortfall arises from insufficient learning of event relational knowledge and incomplete coverage of reasoning paradigms. In this paper, we propose WizardEvent, a hybrid event-aware instruction tuning approach that leads to better event reasoning abilities. Specifically, we first represent the events and relations of event relational knowledge in a novel structure and mine this knowledge from raw text. Second, we introduce prototypical event reasoning paradigms comprising four reasoning formats. Lastly, we wrap the mined event relational knowledge with our reasoning paradigms to create the instruction tuning dataset. We fine-tune LLMs on this enriched dataset to obtain WizardEvent, significantly improving their event reasoning. We rigorously evaluate WizardEvent through extensive experiments across 10 event reasoning tasks, and we additionally annotate a new dataset for evaluating event relational knowledge. The results demonstrate that WizardEvent substantially outperforms other instruction-tuned models, indicating the success of our approach in enhancing LLMs' proficiency in event reasoning.
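To make the pipeline concrete, below is a minimal sketch of the "wrapping" step: turning a mined piece of event relational knowledge into instruction-tuning records, one per reasoning format. The (head event, relation, tail event) structure, the format names, and the prompt templates are illustrative assumptions, not the paper's actual representation or its four formats.

```python
# A minimal sketch (not the paper's actual implementation) of wrapping
# mined event relational knowledge into instruction-tuning examples.
# The triple structure and the two example reasoning formats below are
# hypothetical; the paper uses its own novel structure and four formats.

from dataclasses import dataclass


@dataclass
class EventRelation:
    head: str      # description of the head event
    relation: str  # relation label, e.g. "causes" or "happens before"
    tail: str      # description of the tail event


# Hypothetical (prompt, answer) templates, one per assumed reasoning format.
TEMPLATES = {
    "relation_identification": (
        'Given event A: "{head}" and event B: "{tail}", '
        "what is the relation from A to B?",
        "{relation}",
    ),
    "event_prediction": (
        'Event: "{head}". Generate an event that this event {relation}.',
        "{tail}",
    ),
}


def to_instruction_example(er: EventRelation, fmt: str) -> dict:
    """Render one (knowledge, format) pair as an instruction-tuning record."""
    prompt_tpl, answer_tpl = TEMPLATES[fmt]
    fields = {"head": er.head, "relation": er.relation, "tail": er.tail}
    return {
        "instruction": prompt_tpl.format(**fields),
        "output": answer_tpl.format(**fields),
    }


if __name__ == "__main__":
    er = EventRelation("the dam broke", "causes", "the valley flooded")
    for fmt in TEMPLATES:
        print(to_instruction_example(er, fmt))
```

Under this reading, each knowledge item yields one training record per reasoning format, so the dataset jointly teaches the relational knowledge and the paradigms for reasoning over it.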
Paper Type: long
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Data resources, Data analysis
Languages Studied: en