Framework for Measuring the Similarity of Visual and Semantic Structures in Sign Languages

Published: 01 Jan 2024 · Last Modified: 22 May 2025 · IW-FCV 2024 · CC BY-SA 4.0
Abstract: Sign languages are visual languages used by deaf and hard-of-hearing communities worldwide. Because signs have been manually designed to be effective in both visual and semantic respects, these two representations are expected to share similar structures. This assumption is valuable because it enables further analysis of the intrinsic system underlying sign languages: by understanding the relationship between a sign and its semantic meaning, we can better design new signs. To verify this assumption, we propose a framework for measuring the similarity between visual and semantic structures in sign languages. Our approach first introduces two vector spaces: a visual space, which encodes a sign's visual features, and a semantic space, which encodes its semantic features. We then project data onto the two spaces, generating two sets of 3D data points. Finally, we define a quantitative metric, called Communicability, that measures the structural similarity between the two sets of data points using shape subspaces. We demonstrate this metric by computing the mean Communicability on a Japanese Sign Language dataset.
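The abstract does not specify how the shape-subspace comparison is computed, but one plausible realization, sketched below under stated assumptions, takes the shape subspace of each 3D point set to be the column space of its centered coordinate matrix and scores structural similarity by the canonical angles between the two subspaces. The function names (`shape_subspace`, `communicability`) and the choice of mean squared cosine as the score are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def shape_subspace(points):
    """Orthonormal basis of the shape subspace of an (N, 3) point set.

    Assumption: the shape subspace is the column space of the centered
    coordinate matrix, a rotation-invariant representation of the set's
    geometric structure.
    """
    centered = points - points.mean(axis=0)
    # Left singular vectors span the column space of the centered data.
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    rank = int((s > 1e-10).sum())
    return u[:, :rank]

def communicability(visual_pts, semantic_pts):
    """Structural similarity of two corresponding (N, 3) point sets,
    scored via canonical angles between their shape subspaces.

    Assumption: the score is the mean squared cosine of the canonical
    angles, which lies in [0, 1] and equals 1 for identical structures.
    """
    u1 = shape_subspace(visual_pts)
    u2 = shape_subspace(semantic_pts)
    # Singular values of U1^T U2 are the cosines of the canonical angles.
    cosines = np.linalg.svd(u1.T @ u2, compute_uv=False)
    return float(np.mean(cosines ** 2))
```

Under this sketch, a point set compared with any rigid rotation of itself scores 1, so the score reflects shared structure rather than a particular embedding orientation; the dataset-level result would then be the mean of this score over all signs.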