Abstract: Classical continuous sign language recognition (CSLR) faces two main challenges in real-world scenarios: traditional RGB cameras may fail to capture accurate inter-frame movement trajectories due to motion blur, and they provide insufficient valid information in low-illumination scenarios. In this paper, we leverage an event camera for the first time to overcome these challenges. Event cameras are bio-inspired vision sensors that can efficiently record high-speed sign language movements in low-illumination scenarios and capture human information while eliminating redundant background interference. To fully exploit the benefits of the event camera for CSLR, we propose a novel event-guided multi-modal CSLR framework that achieves strong performance in complex scenarios. Specifically, a time redundancy correction (TRCorr) module is proposed to rectify redundant information in the temporal sequences, directing the model to focus on distinctive features. A multi-modal cross-attention interaction (MCAI) module is proposed to facilitate information fusion between the event and frame domains. Furthermore, we construct the first event-based CSLR dataset, named EvCSLR, which will be released as the first event-based CSLR benchmark. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on the EvCSLR and PHOENIX-2014T datasets.
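To make the role of the MCAI module concrete, the following is a minimal sketch of bidirectional cross-attention fusion between event-stream and RGB-frame features. It is an illustrative approximation only, not the paper's MCAI implementation: the module name `CrossModalFusion`, the feature dimensions, and the residual bidirectional design are all assumptions for the purpose of this sketch.

```python
# Minimal sketch of cross-attention fusion between event and frame features.
# Illustrative only; not the paper's MCAI module. All names/dims are assumed.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Event features query frame features, and vice versa.
        self.event_to_frame = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.frame_to_event = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_e = nn.LayerNorm(dim)
        self.norm_f = nn.LayerNorm(dim)
        self.proj = nn.Linear(2 * dim, dim)  # fuse the two attended streams

    def forward(self, event_feats: torch.Tensor, frame_feats: torch.Tensor) -> torch.Tensor:
        # event_feats, frame_feats: (batch, time, dim) per-timestep features
        e_attn, _ = self.event_to_frame(event_feats, frame_feats, frame_feats)
        f_attn, _ = self.frame_to_event(frame_feats, event_feats, event_feats)
        e = self.norm_e(event_feats + e_attn)  # residual connection + norm
        f = self.norm_f(frame_feats + f_attn)
        return self.proj(torch.cat([e, f], dim=-1))  # (batch, time, dim)

# Example usage with dummy temporal features
fusion = CrossModalFusion(dim=512, num_heads=8)
ev = torch.randn(2, 64, 512)  # event-stream features
fr = torch.randn(2, 64, 512)  # RGB-frame features
out = fusion(ev, fr)
print(out.shape)  # torch.Size([2, 64, 512])
```

Bidirectional attention lets each modality compensate for the other's weakness: blurred RGB frames can borrow sharp motion cues from the event stream, while sparse event features can borrow appearance context from the frames.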