Abstract: A prominent resource gap exists in affective computing datasets for American Sign Language (ASL). This manuscript reports preliminary findings from an ongoing multimodal ASL corpus collection and analysis study, focusing on human-generated modalities (e.g., eye tracking, facial expression, head movement) and the expression of frustration and confusion among deaf and hard of hearing study participants. These affective states can be important for understanding user experiences in human-computer interaction and for offering system feedback to enhance AI-human collaboration. Expanding a data collection methodology from prior work involving English-speaking participants, this exploratory study seeks to discern characteristics associated with confused or frustrated affective states in the collected signed language interactions. Such insights have the potential to facilitate the development of models capable of recognizing emotional expressions. Initial results reveal distinctions between self-annotated instances of participant frustration and confusion, with certain features diverging between the two affective states.
DOI: 10.1145/3663548.3688500