State Space Models Are Effective Sign Language Learners: Exploiting Phonological Compositionality for Vocabulary-Scale Recognition

01 Feb 2026 (modified: 04 Mar 2026) · Submitted to ICLR 2026 Workshop LMRL · CC BY 4.0
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Track: long paper (up to 10 pages)
Keywords: sign language recognition, state space models, compositional learning, phonological decomposition, few-shot learning, skeleton-based recognition, graph attention networks, prototypical classification, deaf accessibility, ethical AI, assistive AI, daily action recognition
TL;DR: Trained on the largest ASL dataset assembled to date (5,565 classes), PhonSSM disentangles phonological parameters via orthogonal subspaces and anatomical graph attention, achieving a skeleton-based state of the art of 72.08% on WLASL2000, a 15.7pp gain over prior methods.
Abstract: Sign language recognition suffers from catastrophic scaling failure: models that achieve high accuracy on small vocabularies collapse at realistic vocabulary sizes. Existing architectures treat signs as atomic visual patterns, learning flat representations that cannot exploit the compositional structure of sign languages, which are systematically built from discrete phonological parameters (handshape, location, movement, orientation) reused across the vocabulary. We introduce PhonSSM, which enforces phonological decomposition through anatomically grounded graph attention and explicit factorization into orthogonal subspaces, and uses prototypical classification to enable few-shot transfer. Using skeleton data alone on the largest ASL dataset assembled to date (5,565 signs), PhonSSM achieves 72.1% on WLASL2000 (+18.4pp over the skeleton-based state of the art), surpassing most RGB methods without any video input. Gains are most dramatic in the few-shot regime (+225% relative), and the model transfers zero-shot to ASL Citizen, exceeding supervised RGB baselines. The vocabulary scaling bottleneck is thus fundamentally a representation learning problem, solvable through compositional inductive biases that mirror linguistic structure.
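The abstract names two mechanisms at a high level: factorization of the sign embedding into orthogonal phonological subspaces and prototypical classification for few-shot transfer. Below is a minimal, hypothetical PyTorch sketch of how such components could be wired together; all module names, dimensions, and the soft orthogonality penalty are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): project a pooled skeleton
# embedding into four phonological subspaces with a soft orthogonality
# penalty, then score classes against mean prototypes by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

PARAMS = ("handshape", "location", "movement", "orientation")


class PhonologicalFactorization(nn.Module):
    """Project a shared embedding into one subspace per phonological parameter."""

    def __init__(self, dim_in: int, dim_sub: int):
        super().__init__()
        self.proj = nn.ModuleDict({p: nn.Linear(dim_in, dim_sub) for p in PARAMS})

    def forward(self, z: torch.Tensor) -> dict:
        # z: (batch, dim_in) pooled sequence embedding, e.g. from a
        # graph-attention / SSM encoder (omitted in this sketch).
        return {p: F.normalize(self.proj[p](z), dim=-1) for p in PARAMS}

    def orthogonality_penalty(self) -> torch.Tensor:
        # Encourage the four projection bases to span near-orthogonal subspaces.
        penalty = torch.zeros(())
        ws = [self.proj[p].weight for p in PARAMS]  # each (dim_sub, dim_in)
        for i in range(len(ws)):
            for j in range(i + 1, len(ws)):
                penalty = penalty + (ws[i] @ ws[j].T).pow(2).mean()
        return penalty


def prototypical_logits(query: torch.Tensor,
                        support: torch.Tensor,
                        support_labels: torch.Tensor,
                        num_classes: int,
                        tau: float = 0.1) -> torch.Tensor:
    """Cosine-similarity logits against per-class mean prototypes."""
    protos = torch.zeros(num_classes, support.size(-1), device=support.device)
    protos.index_add_(0, support_labels, support)
    counts = torch.bincount(support_labels, minlength=num_classes).clamp(min=1)
    protos = F.normalize(protos / counts.unsqueeze(-1), dim=-1)
    return F.normalize(query, dim=-1) @ protos.T / tau
```

In a setup like this, the orthogonality penalty would be added to the classification loss with a small weight, and prototypes built from a handful of support embeddings would give the few-shot behaviour the abstract describes; the actual encoder, training objective, and weighting used in PhonSSM are not specified here.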
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Presenter: ~Bryan_Cheng1
Format: Yes, the presenting author will attend in person if this work is accepted to the workshop.
Funding: Yes, the presenting author of this submission falls under ICLR’s funding aims, and funding would significantly impact their ability to attend the workshop in person.
Submission Number: 15