Dynamic Sign Language Recognition Based on Convolutional Neural Networks and Texture Maps

SIBGRAPI 2019
Abstract: Sign language recognition (SLR) is a very challenging task due to the complexity of learning or developing descriptors to represent its primary parameters (location, movement, and hand configuration). In this paper, we propose a robust deep learning-based method for sign language recognition. Our approach represents multimodal information (RGB-D) through texture maps that describe the hand location and movement. Moreover, we introduce an intuitive method to extract a representative frame that describes the hand shape. We then feed this information into three-stream and two-stream CNN models to learn robust features capable of recognizing a dynamic sign. We conduct our experiments on two sign language datasets, and the comparison with state-of-the-art SLR methods reveals the superiority of our approach, which optimally combines texture maps and hand shape for SLR tasks.
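To illustrate the kind of multi-stream fusion the abstract describes, the following is a minimal, hypothetical PyTorch sketch of a two-stream CNN that consumes a texture map (encoding hand location and movement) and a representative hand-shape frame. The ResNet-18 backbones, the 512-dimensional feature sizes, and the late fusion by concatenation are assumptions for illustration, not the authors' exact architecture.

```python
# Hypothetical two-stream fusion sketch (not the paper's exact model):
# one stream encodes a texture map summarizing hand location/movement,
# the other encodes a representative hand-shape frame.
import torch
import torch.nn as nn
from torchvision import models


class TwoStreamSLR(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Each stream is an ImageNet-style CNN truncated before its classifier.
        self.texture_stream = models.resnet18(weights=None)
        self.texture_stream.fc = nn.Identity()
        self.shape_stream = models.resnet18(weights=None)
        self.shape_stream.fc = nn.Identity()
        # Late fusion: concatenate the two 512-d feature vectors and classify.
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, texture_map: torch.Tensor, shape_frame: torch.Tensor):
        f_tex = self.texture_stream(texture_map)   # (B, 512)
        f_shape = self.shape_stream(shape_frame)   # (B, 512)
        return self.classifier(torch.cat([f_tex, f_shape], dim=1))


# Example forward pass with dummy inputs (batch of 4 RGB images, 224x224).
model = TwoStreamSLR(num_classes=64)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 64])
```

A three-stream variant would add a further backbone (e.g., for a second texture map) and widen the fusion layer accordingly; the choice of late fusion here is only one plausible design.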