Abstract: Sign language (SL) is the primary mode of communication for Deaf and Hard-of-Hearing (DHH) individuals and differs fundamentally from spoken languages. While Sign Language Expression (SLE) systems have made significant progress in generating gestures from text using deep learning, their integration into assistive Human-Robot Interaction (HRI) remains limited. This position paper introduces online SLE as a novel paradigm for enabling responsive, real-time SL communication on robotic platforms. We analyze the technical, dataset, and evaluation challenges in deploying SLE models on robots and present preliminary experiments illustrating the trade-offs between efficiency and expressiveness. We further propose design considerations for online model architectures, identify key gaps in current datasets, and call for interdisciplinary collaboration with the Deaf community. Our goal is to pave the way toward inclusive, socially-aware robotic agents capable of natural SL communication.
External IDs: dblp:conf/ro-man/KhanTN25