NetPrompt: Neural Network Prompting Enhances Event Extraction in Large Language Models

Published: 2025 · Last Modified: 14 Jan 2026 · IEEE Trans. Big Data 2025 · CC BY-SA 4.0
Abstract: Event extraction involves extracting event-related information, such as event types and event arguments, from context. It has long been tackled with carefully designed neural networks or fine-tuned pre-trained language models, but these approaches require substantial annotated data for parameter tuning and are resource-intensive. Recently, prompting strategies with frozen parameters, such as Chain-of-Thought and Self-Consistency, have achieved success in NLP with LLMs by generating intermediate reasoning steps. However, they suffer from error propagation and a lack of interaction between different thoughts. In this paper, we propose Neural Network-based Prompting (NetPrompt), a novel network-structured prompting strategy for event extraction. The core idea behind NetPrompt is to imitate the strong information-integration capability of neural network structures. Specifically, we first decompose the event extraction problem into diverse intermediate subtasks; each subtask is represented as a node in a layer of the network, and the outputs of the nodes in a preceding layer are fed into the subsequent layer. Second, we propose pruning strategies to adapt the reasoning overhead to different problems. Finally, we conduct extensive experiments on two widely used event extraction benchmarks to evaluate NetPrompt. The results demonstrate that NetPrompt significantly improves event extraction performance compared to previous methods.
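The abstract only sketches the architecture at a high level. Below is a minimal, hypothetical Python sketch of how such layered, network-structured prompting could be organized: each node holds a subtask prompt, and the outputs of one layer are passed as context to every node in the next layer. The `PromptNode` class, the specific subtask decomposition, and the `fake_llm` stand-in are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of network-structured prompting (not the paper's code).
# Each node is a subtask prompt; outputs of one layer are concatenated and
# fed as prior context to every node in the next layer, mimicking the
# information integration of a feed-forward neural network.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PromptNode:
    """A single subtask prompt, e.g. trigger identification or argument filling."""
    name: str
    template: str  # expects {context} and {inputs} placeholders


def run_network(
    llm: Callable[[str], str],       # frozen LLM, e.g. an API client wrapper
    layers: List[List[PromptNode]],  # subtask nodes grouped by layer
    context: str,                    # the input document
) -> List[str]:
    """Run all nodes layer by layer; earlier outputs feed later nodes."""
    previous_outputs: List[str] = []
    for layer in layers:
        current_outputs: List[str] = []
        for node in layer:
            prompt = node.template.format(
                context=context,
                inputs="\n".join(previous_outputs) or "(none)",
            )
            current_outputs.append(f"[{node.name}] {llm(prompt)}")
        previous_outputs = current_outputs  # becomes input to the next layer
    return previous_outputs  # outputs of the final layer


if __name__ == "__main__":
    # Toy stand-in for a real LLM call; replace with an actual API client.
    def fake_llm(prompt: str) -> str:
        return f"answer({len(prompt)} chars of prompt)"

    # Illustrative subtask layout; the paper's actual decomposition may differ.
    layers = [
        [PromptNode("event-type", "Context: {context}\nPrior: {inputs}\nList the event types.")],
        [
            PromptNode("trigger", "Context: {context}\nPrior: {inputs}\nIdentify the trigger words."),
            PromptNode("roles", "Context: {context}\nPrior: {inputs}\nList the argument roles."),
        ],
        [PromptNode("arguments", "Context: {context}\nPrior: {inputs}\nExtract the event arguments.")],
    ]
    print(run_network(fake_llm, layers, "The company acquired its rival in 2020."))
```

The pruning strategies mentioned in the abstract would correspond, in this sketch, to skipping or removing nodes (or whole layers) when a problem does not need the full network, which reduces the number of LLM calls per document.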