Abstract: Deep learning (DL)-based semantic communication (SC) redefines traditional communication by shifting the focus from reliable bit-by-bit transmission to conveying only task-relevant information, thereby reducing bandwidth usage. However, SC is vulnerable to adversarial attacks due to wireless channel exposure and the susceptibility of DL models to small input perturbations. While extensive research has explored adversarial attacks in image-based SC, there is limited research on text-based adversarial noise targeting text generation models. Therefore, we propose the Semantic Perturbation Generator (SemPerGe), the first framework designed to craft targeted adversarial perturbations in transmitted text data within SC. SemPerGe operates without prior knowledge of the DL model's architecture, parameters, or logits, instead leveraging off-the-shelf large language models to introduce semantic noise effectively. The framework is composed of two phases: (i) a Significant Token Marker, which identifies crucial tokens influencing the semantics of the transmitted content, and (ii) a Perturbation Generator, which modifies these tokens to subtly alter the content's meaning while preserving linguistic coherence and grammatical structure. We evaluate SemPerGe against four baselines across datasets from two application domains, demonstrating its robustness and adaptability. Additionally, a user study confirms the stealthiness of the generated adversarial texts, with 96% of participants on average unable to detect the adversarial modifications.
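To make the two-phase pipeline concrete, the sketch below shows one plausible black-box realization: tokens are ranked by how much their removal shifts the sentence's meaning (phase i), and an off-the-shelf LLM is then prompted to rewrite only those tokens (phase ii). All interfaces here are assumptions for illustration: `semantic_shift` is a stdlib stand-in for a real embedding-based scorer, and `llm_rewrite` is a hypothetical hook for any chat-completion API; the paper's actual prompts and scoring are not reproduced.

```python
# Illustrative sketch of a SemPerGe-style two-phase attack (hypothetical
# interfaces; not the authors' implementation).
import difflib
import re
from typing import Callable, List


def semantic_shift(original: str, ablated: str) -> float:
    """Stand-in semantic scorer: higher means removing the token changed more.

    A real scorer would compare sentence embeddings (e.g., cosine distance);
    SequenceMatcher is used only to keep this sketch stdlib-only.
    """
    return 1.0 - difflib.SequenceMatcher(None, original, ablated).ratio()


def mark_significant_tokens(text: str,
                            score_fn: Callable[[str, str], float] = semantic_shift,
                            top_k: int = 2) -> List[str]:
    """Phase (i), Significant Token Marker: rank tokens by how much their
    removal shifts the sentence's meaning, and keep the top-k."""
    tokens = sorted(set(re.findall(r"[A-Za-z']+", text)))
    scored = [(score_fn(text, re.sub(rf"\b{re.escape(t)}\b", "", text)), t)
              for t in tokens]
    scored.sort(reverse=True)
    return [t for _, t in scored[:top_k]]


def perturb(text: str, targets: List[str],
            llm_rewrite: Callable[[str], str]) -> str:
    """Phase (ii), Perturbation Generator: prompt an off-the-shelf LLM to
    swap the marked tokens while preserving fluency and grammar."""
    prompt = (
        f"Rewrite the sentence, replacing only the words {targets} so its "
        f"meaning subtly changes but it stays fluent and grammatical:\n{text}"
    )
    # Black-box call: no access to the victim model's weights or logits.
    return llm_rewrite(prompt)


if __name__ == "__main__":
    msg = "The shipment arrives on Monday at the north warehouse"
    marked = mark_significant_tokens(msg)
    # Stub LLM hook for demonstration; plug in any chat-completion API here.
    adv = perturb(msg, marked, llm_rewrite=lambda p: "<LLM-rewritten sentence>")
    print(marked, adv)
```

Note that the attack stays black-box by construction: significance is estimated purely from input-output behavior, and the rewrite is delegated to a separate LLM, matching the abstract's claim that no victim-model architecture, parameters, or logits are required.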
External IDs: dblp:conf/cns/AnjumMAARPO25