Adversarial Attacks on Skeleton-Based Sign Language Recognition

Published: 01 Jan 2023 · Last Modified: 19 Feb 2025 · ICIRA (1) 2023 · CC BY-SA 4.0
Abstract: Despite the impressive performance achieved by sign language recognition systems based on skeleton information, our research has uncovered their vulnerability to malicious attacks. In response to this challenge, we present an adversarial attack specifically designed for sign language recognition models that rely on extracted human skeleton data as features. Our attack assesses the robustness and sensitivity of these models, and we propose adversarial training techniques to enhance their resilience. Moreover, we conduct transfer experiments with the generated adversarial samples to demonstrate that these adversarial examples transfer across different models. Additionally, through experiments on the sensitivity of sign language recognition models, we identify the experimental parameter settings that yield the most effective attacks. This research contributes a foundation for future investigations into the security of sign language recognition.
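The abstract does not specify the attack algorithm, so as a hedged illustration only, the sketch below shows a generic FGSM-style (fast gradient sign) perturbation applied to skeleton keypoint features. The toy linear classifier, its dimensions, and the `eps` budget are all assumptions for demonstration, not the paper's method; they merely show how a small, norm-bounded perturbation of joint coordinates can raise a recognition model's loss.

```python
import numpy as np

# Hypothetical setup: one sign sample as T frames x J joints x C coordinates,
# classified by a toy linear model so the input gradient has a closed form.
rng = np.random.default_rng(0)
T, J, C = 8, 21, 2            # frames, joints, (x, y) coordinates (assumed)
num_classes = 5               # assumed label count

skeleton = rng.standard_normal((T, J, C))          # clean skeleton sample
W = rng.standard_normal((T * J * C, num_classes))  # toy classifier weights
label = 3                                          # ground-truth class

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad(x, y):
    """Gradient of cross-entropy loss w.r.t. the flattened skeleton input."""
    p = softmax(x.ravel() @ W)
    p[y] -= 1.0                       # dL/dz for softmax cross-entropy
    return (W @ p).reshape(x.shape)   # chain rule back to the input

def fgsm(x, y, eps=0.05):
    """One-step sign attack: shift every coordinate by +/- eps along the gradient."""
    return x + eps * np.sign(loss_grad(x, y))

adv = fgsm(skeleton, label)
# The perturbation stays within an L-infinity ball of radius eps,
# so the adversarial skeleton remains visually close to the original.
```

Adversarial training, as mentioned in the abstract, would then mix such perturbed samples back into the training set so the model also minimizes loss on them.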