Abstract: We are exposed daily to fake news, a growing problem that spreads in various forms, including rumours, advertisements, social media posts, and political propaganda. While such content has predominantly been created by humans, recent years have seen an increase in digital content fabricated or manipulated with deep learning. Large Language Models (LLMs), for instance, pose a real threat when used to generate highly convincing fake news that can evade conventional detection systems. This study evaluates the impact of LLM-generated fake news on machine learning (ML) classifiers. The ML models are trained on Portuguese-language datasets, and experiments are conducted on aligned data, where each fake news sample has a true news counterpart. We assess the performance of each ML model on synthetic fake news generated with Sabiá-3, a Portuguese-based LLM. Our results reveal significant performance degradation of the ML models under mismatched conditions, e.g., when they are trained on human-generated content and tested on LLM-generated fake news (or vice versa). These findings highlight the need for updated detection strategies capable of handling the linguistic and stylistic nuances introduced by LLMs. To address this, a Retrieval-Augmented Generation (RAG) framework was evaluated under the same conditions as the ML models. The framework proved more robust under mismatched conditions, whereas the ML models performed better when there was no distribution shift between training and test data. These results contribute to the understanding of fake news detection in Portuguese, emphasizing the importance of adapting existing models to the evolving nature of misleading LLM-generated content.
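To make the matched vs. mismatched evaluation protocol concrete, the sketch below illustrates the idea under stated assumptions: the abstract does not name the classifiers used, so TF-IDF features with logistic regression serve only as a stand-in, and the tiny inline corpora are placeholders for the aligned Portuguese datasets and the Sabiá-3-generated test set, not the paper's actual data.

```python
# Minimal sketch of the cross-condition evaluation protocol (not the
# paper's implementation). Classifier choice and corpora are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Placeholder corpora: stand-ins for the aligned human-written
# Portuguese fake/true news pairs and the LLM-generated counterparts.
human_train = (["noticia verdadeira sobre economia",
                "noticia falsa sobre economia"] * 50, [0, 1] * 50)
human_test = (["outra noticia verdadeira",
               "outra noticia falsa"] * 10, [0, 1] * 10)
llm_test = (["texto fabricado por modelo de linguagem",
             "reportagem verificada correspondente"] * 10, [1, 0] * 10)

def evaluate(train, test):
    """Fit TF-IDF + logistic regression on `train`; report macro F1 on `test`."""
    texts, labels = train
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    return f1_score(test[1], clf.predict(test[0]), average="macro")

# Matched condition: train and test on human-written news.
print("matched  F1:", evaluate(human_train, human_test))
# Mismatched condition: same training data, LLM-generated test set --
# the setting where the study reports significant degradation.
print("mismatch F1:", evaluate(human_train, llm_test))
```

The toy data is illustrative only; with real corpora, the gap between the two printed scores is what quantifies the distribution shift the abstract describes.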