Empowering LLMs to Synthesize AI and Human Intelligence for Explainable Public Health Misinformation Detection on Social Media
Abstract: This paper studies the critical problem of explainable public health misinformation detection on social media, where clear explanations are essential for enhancing user understanding and trust, in contrast to opaque black-box detection results. To tackle this problem, there is a growing trend of leveraging collective intelligence from diverse sources, such as deep neural networks (DNNs), human intelligence, and large language models (LLMs). However, integrating hybrid intelligence from these sources remains challenging because each offers complementary strengths: DNNs excel at accurate and efficient classification, crowd workers provide contextual understanding and readable explanations, and LLMs offer extensive domain knowledge and advanced language generation. Moreover, current crowdsourcing and human-AI collaboration methods mainly focus on aggregating misinformation detection labels using traditional measures such as consistency, often overlooking more complex and challenging inputs such as textual explanations. We propose SynthX, a collective intelligence framework that incorporates a holistic prompting design to harness the language and reasoning capabilities of LLMs for synthesizing diverse detection and explanation results. It also integrates a novel hybrid approach that combines estimation theory with LLMs to assess the varying reliability of detection results from different intelligence sources. Our evaluation on a real-world social media misinformation dataset demonstrates that SynthX consistently outperforms a rich set of state-of-the-art baselines in both detection accuracy and explanation quality.
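The abstract does not specify SynthX's internals, so the Python sketch below is purely illustrative of how its two components could fit together: an EM-style reliability estimator in the spirit of estimation theory (here a simplified one-coin Dawid-Skene model, which is an assumption) whose per-source weights feed a single holistic prompt asking an LLM to synthesize a final verdict and explanation. All source names, labels, the smoothing constants, and the prompt wording are hypothetical; the actual LLM call is omitted.

```python
from collections import defaultdict

# Hypothetical verdicts from diverse intelligence sources on one post.
# Labels: True = misinformation, False = not misinformation.
# Source names and explanations are invented for illustration only.
detections = {
    "dnn_classifier": {"label": True,  "explanation": "High similarity to known false claims."},
    "crowd_worker_1": {"label": True,  "explanation": "The cited study does not exist."},
    "crowd_worker_2": {"label": False, "explanation": "Seems consistent with CDC guidance."},
    "llm_zero_shot":  {"label": True,  "explanation": "Contradicts established vaccine safety data."},
}

def estimate_reliabilities(all_posts, n_iters=10):
    """EM-style reliability estimation (simplified one-coin Dawid-Skene):
    alternate between (1) reliability-weighted consensus labels per post and
    (2) per-source agreement rates with that consensus."""
    sources = {s for post in all_posts for s in post}
    weights = {s: 0.5 for s in sources}  # start uninformative
    for _ in range(n_iters):
        # E-step: weighted vote per post
        consensus = []
        for post in all_posts:
            score = sum((1 if v["label"] else -1) * weights[s] for s, v in post.items())
            consensus.append(score > 0)
        # M-step: reliability = smoothed agreement with the consensus
        agree, total = defaultdict(float), defaultdict(float)
        for post, label in zip(all_posts, consensus):
            for s, v in post.items():
                total[s] += 1
                agree[s] += (v["label"] == label)
        weights = {s: (agree[s] + 1) / (total[s] + 2) for s in sources}  # Laplace smoothing
    return weights

def build_synthesis_prompt(post_text, detections, weights):
    """Assemble one prompt asking an LLM to fuse labels and explanations,
    conditioned on the estimated per-source reliabilities."""
    lines = [f"Post: {post_text}", "Evidence from multiple detectors:"]
    for s, v in detections.items():
        verdict = "misinformation" if v["label"] else "not misinformation"
        lines.append(f"- {s} (reliability {weights[s]:.2f}): {verdict}; rationale: {v['explanation']}")
    lines.append("Weighing more reliable sources more heavily, give a final "
                 "verdict and a short, readable explanation.")
    return "\n".join(lines)

weights = estimate_reliabilities([detections])
prompt = build_synthesis_prompt("New vaccine causes infertility, study shows.", detections, weights)
print(prompt)  # this prompt would then be sent to an LLM for synthesis
```

With more posts, the weights converge toward each source's empirical agreement rate, so a consistently wrong crowd worker or classifier is discounted in both the vote and the prompt; the paper's actual estimator and prompting design are presumably richer than this sketch.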