Abstract: Detecting rumors requires a profound understanding of the real-world background. Most existing methods based on small language models (SLMs) have found that extracting external knowledge from knowledge bases (KBs), such as entity concepts and evidence related to an article, significantly enhances rumor detection performance. However, two limitations remain: 1) when extracting entity concepts from KBs, entity ambiguity may introduce inappropriate concepts; 2) prevailing methods for extracting evidential knowledge rely on individual KBs, limiting the scope of available information. Recent large language models (LLMs) have shown remarkable performance on various tasks, but leveraging LLMs' internal knowledge for rumor detection remains underexplored. To address these limitations, we propose a ChatGPT-based External Knowledge Integration Network (CHKIN) that uses ChatGPT to extract entity concepts and evidence for enhanced rumor detection. First, CHKIN employs the LLM to extract entities and their concepts; because the LLM takes the surrounding context into account, issues caused by entity ambiguity are alleviated. Second, CHKIN employs the LLM, rather than KBs, to gather evidence; the LLM's broader training knowledge results in more comprehensive and continuous evidence generation. Furthermore, CHKIN serves as a bridge between SLMs and LLMs. Experiments on two real-world datasets demonstrate that CHKIN outperforms three types of baselines: SLM-based methods, LLM-based methods, and shallow neural networks.