Personalized Attacks of Social Engineering in Multi-turn Conversations - LLM Agents for Simulation and Detection
Abstract: The rapid advancement of conversational agents, particularly chatbots powered by Large Language Models (LLMs), poses a significant risk of social engineering (SE) attacks on social media platforms. SE detection in multi-turn, chat-based interactions is considerably more complex than single-instance detection due to the dynamic nature of these conversations. A critical factor in mitigating this threat is understanding the mechanisms through which SE attacks operate, specifically how attackers exploit vulnerabilities and how victims' personality traits contribute to their susceptibility. In this work, we propose an LLM-agentic framework, SE-VSim, to simulate SE attack mechanisms by generating realistic multi-turn conversations. We model victim agents with varying personality traits to assess how psychological profiles influence susceptibility to manipulation. Using a dataset of over 1,000 simulated conversations, we examine attack scenarios in which adversaries, posing as recruiters, funding agencies, and journalists, attempt to extract sensitive information. Based on this analysis, we present a proof of concept, SE-OmniGuard, which offers personalized protection to users by leveraging prior knowledge of the victim's personality, evaluating attack strategies, and monitoring information exchanges in conversations to identify potential SE attempts. Our code and data are available at the following repository: https://anonymous.4open.science/r/AI-agentic-social-eng-defense-1D52
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: security/privacy
Contribution Types: Publicly available software and/or pre-trained models, Data resources, Data analysis
Languages Studied: English
Submission Number: 1327