Abstract: Recent research in zero-shot Relation Extraction (RE) has concentrated on employing Large Language Models (LLMs) as extractors, owing to their notable zero-shot capabilities. By directly prompting the LLM or transforming the task into a Question Answering (QA) problem, the LLM can efficiently extract relations from a given sample. However, current methods often exhibit suboptimal performance, primarily because they lack the detailed, context-specific prompts needed to handle the variety of sentences and relations effectively. To bridge this gap, we introduce the Self-Prompting framework, a novel method designed to fully harness the RE knowledge embedded within LLMs. Specifically, our framework employs a three-stage diversity approach to prompt LLMs, generating multiple synthetic samples that encapsulate specific relations from scratch. These generated samples then serve as in-context learning examples, offering explicit and context-specific guidance that prompts LLMs for RE more effectively. Experimental evaluations on benchmark datasets demonstrate the superiority of our approach over existing LLM-based zero-shot RE methods. Furthermore, our experiments confirm that the generation pipeline produces high-quality synthetic data that significantly enhances performance.
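As a rough illustration of the idea the abstract describes, the sketch below shows one plausible way to (1) prompt an LLM to synthesize labelled samples for a target relation and (2) reuse those samples as in-context demonstrations when extracting a relation from a real sentence. This is not the paper's actual pipeline: the prompt wording, the entity markers, and the `call_llm` stub are all illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion endpoint).

    Hypothetical stub -- swap in whatever API the experiments use.
    """
    raise NotImplementedError

def make_generation_prompt(relation: str, n: int = 3) -> str:
    # Stage 1 (illustrative): ask the LLM to invent diverse samples
    # for one relation, marking head/tail entities with simple tags.
    return (
        f"Write {n} diverse sentences, each expressing the relation "
        f"'{relation}' between two entities. Mark the head entity with "
        "<h>...</h> and the tail entity with <t>...</t>."
    )

def make_extraction_prompt(demos, sentence, relations):
    # Stage 2 (illustrative): use the synthetic samples as in-context
    # demonstrations and ask the LLM to label a real sentence.
    demo_block = "\n".join(f"Sentence: {s}\nRelation: {r}" for s, r in demos)
    return (
        "Decide which relation holds in the last sentence.\n"
        f"Candidate relations: {', '.join(relations)}\n\n"
        f"{demo_block}\n\nSentence: {sentence}\nRelation:"
    )
```

In this reading, the "context-specific guidance" comes from conditioning the extraction prompt on demonstrations generated for the exact candidate relations, rather than on a generic task description.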
Paper Type: long
Research Area: Information Extraction
Contribution Types: NLP engineering experiment
Languages Studied: English