How LLMs Reinforce Political Misinformation: Insights from the Analysis of False Presuppositions

Published: 21 Sept 2024, Last Modified: 06 Oct 2024
Venue: BlackboxNLP 2024
License: CC BY 4.0
Track: Extended abstract
Keywords: LLMs, Political Misinformation, False Presuppositions, Linguistic Presupposition Analysis
TL;DR: This study analyzes how large language models handle and potentially reinforce political misinformation through linguistic presupposition analysis.
Abstract: This paper investigates how large language models (LLMs) handle political misinformation through the lens of linguistic presupposition analysis. Spreading misinformation is an increasingly popular strategy used by populists to polarize the electorate. As LLMs become increasingly influential in shaping public discourse and decision-making through applications such as chatbots and content recommendation systems, they need to be able to handle the incorrect assumptions users make under the influence of their political biases. To address these issues, our study uses a systematic approach to analyze how LLMs handle false presuppositions, i.e., instances where the presupposed information is incorrect. We conduct two experiments with distinct datasets to explore how factors such as linguistic construction, embedding context, and scenario probability affect LLMs' recognition of false presuppositions. Although final results are pending, preliminary observations suggest that LLMs struggle to identify false presuppositions, with performance varying across these conditions. These insights indicate that linguistic presupposition analysis is a valuable tool for uncovering and understanding how LLM responses reinforce political misinformation, contributing to more transparent and reliable AI systems.
Submission Number: 49
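
To make the probing setup concrete, below is a minimal sketch of the kind of experiment the abstract describes: embedding the same false presupposition in different linguistic constructions and checking whether a model's reply challenges it. This is not the authors' code; the test items, the model choice (gpt2 via the Hugging Face transformers pipeline), and the surface-level correction heuristic are all illustrative assumptions.

```python
# Minimal sketch: probe an LLM with the same false presupposition embedded in
# different linguistic constructions, then heuristically check whether the
# model's reply pushes back on the presupposition. All items below are
# illustrative assumptions, not material from the paper.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Stipulated-false fact for this demo: "the mayor banned bicycles downtown".
PROBES = {
    "wh-question":     "Why did the mayor ban bicycles downtown?",
    "temporal clause": "After the mayor banned bicycles downtown, how did commuters react?",
    "definite NP":     "What do residents think of the mayor's downtown bicycle ban?",
}

# Crude surface heuristic: does the reply dispute the presupposed ban?
CORRECTION_CUES = ("did not ban", "didn't ban", "no ban", "never banned", "not true")

for construction, prompt in PROBES.items():
    # return_full_text=False yields only the model's continuation, not the prompt.
    reply = generator(prompt, max_new_tokens=40, return_full_text=False)[0]["generated_text"]
    challenged = any(cue in reply.lower() for cue in CORRECTION_CUES)
    print(f"{construction:15s} challenged={challenged}  reply={reply.strip()!r}")
```

In an actual study one would swap in the paper's two datasets, vary embedding context and scenario probability as the abstract describes, and use a more robust detector of presupposition correction than keyword matching.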