Deepfakes in Developing Societies: Handling the Societal Impacts and Cross-Disciplinary Vulnerabilities in Tech-Limited Environments
Keywords: Deepfakes, Misinformation, Large Language Models, Computational Social Science, Cross-Disciplinary Vulnerabilities, Developing Countries, Tech-Limited Environments
TL;DR: The study presents a comprehensive framework for managing deepfakes, focusing on prevention, detection, and mitigation strategies tailored to tech-limited environments, while offering actionable insights to support vulnerable populations.
Abstract: Deepfakes, AI-generated multimedia content (images, videos, audio, and text) designed to mimic authentic media, have become increasingly prevalent. Their rise poses substantial risks to political stability, social trust, and economic well-being, especially in developing societies with limited media literacy and technological infrastructure. This work is motivated by the urgent need to understand how these technologies are perceived and how they affect communities that lack the resources to combat misinformation. We conducted a detailed survey to assess public awareness of, perceptions of, and experiences with deepfakes, and then developed a comprehensive framework for managing their impact. The framework addresses the prevention, detection, and mitigation of deepfakes, providing practical strategies tailored to tech-limited environments. Our findings reveal a critical knowledge gap and a lack of effective detection tools, highlighting the need for targeted public education and accessible verification tools. In conclusion, this work offers actionable insights to support vulnerable populations in managing the challenges posed by deepfakes and calls for further interdisciplinary efforts to tackle these issues.
Submission Number: 7