Large Language Models as Sign Language Interfaces: Meeting the Requests of Deaf Users of LLMs in a Hearing-Centric World

ACL ARR 2024 December Submission 1395 Authors

16 Dec 2024 (modified: 05 Feb 2025) · CC BY 4.0
Abstract:

Deaf or Hard-of-Hearing (DHH) individuals use Large Language Models (LLMs) in unique ways and have asked for sign language grammar and Deaf culture to be incorporated into the training of these models, alongside video-based sign language input capabilities. Yet developers of instruction-tuned LLMs have not addressed these requests. Instead, dedicated translation models are built for sign languages (SLs), reducing the needs of signers to a mere communication gap between hearing and Deaf communities. In this paper, we take an approach orthogonal to these traditional methods of studying SLs. To meet the requests of Deaf users of LLMs, we examine sign language processing (SLP) through a theoretical lens, then introduce the first text-based and multimodal LLMs for SLP. We propose new prompting and fine-tuning strategies for text-based and multimodal SLP that incorporate sign linguistic rules and conventions. We test how these models generalize to other SLP tasks, showing that LLMs can process signs while remaining adept at spoken language tasks. Our code and model checkpoints will be open-source. We will update our model suite as newer open-source LLMs, datasets, and SLP tasks become available.
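As a rough illustration of what a gloss-aware prompting strategy of this kind could look like, the minimal Python sketch below embeds sign linguistic conventions into an instruction prompt before asking an LLM to translate a German Sign Language (DGS) gloss sequence. The rule text, the function name build_translation_prompt, and the example glosses are all hypothetical placeholders for illustration; they are not the paper's actual prompts or code.

```python
# Hypothetical sketch: compose a prompt that states sign linguistic
# conventions before a gloss-to-text translation request. The specific
# rules and glosses below are illustrative assumptions, not the
# authors' method.

GLOSS_RULES = (
    "Sign language glosses are written as uppercase citation forms, "
    "often follow topic-comment order, and omit function words such "
    "as articles and copulas."
)

def build_translation_prompt(gloss_sequence: str,
                             target_language: str = "German") -> str:
    """Build an instruction prompt that embeds gloss conventions
    before asking for a translation of a DGS gloss sequence."""
    return (
        f"{GLOSS_RULES}\n\n"
        f"Translate the following German Sign Language (DGS) gloss "
        f"sequence into fluent {target_language}.\n"
        f"Glosses: {gloss_sequence}\n"
        f"Translation:"
    )

if __name__ == "__main__":
    # Illustrative gloss sequence, roughly "Tomorrow I go to school".
    print(build_translation_prompt("MORGEN ICH SCHULE GEHEN"))
```

The resulting string would then be sent to an LLM of choice; the same pattern extends to fine-tuning data construction, where each training example pairs a rule-prefixed gloss prompt with its reference translation.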

Paper Type: Long
Research Area: Human-Centered NLP
Research Area Keywords: user-centered design, value-centered design, human factors in NLP, participatory/community-based NLP, values and culture, fine-tuning, less-resourced languages, resources for less-resourced languages, software and tools, multimodality
Contribution Types: Approaches to low-resource settings, Publicly available software and/or pre-trained models, Position papers, Theory
Languages Studied: German Sign Language, German, English
Submission Number: 1395