State Space Models are Effective Sign Language Learners: Exploiting Phonological Compositionality for Vocabulary-Scale Recognition
Track: Main Papers Track (6 to 9 pages)
Keywords: Sign language recognition, phonological decomposition, compositional representation learning, state space models, graph attention networks, skeleton-based recognition, disentangled representations, few-shot learning, American Sign Language, large-vocabulary recognition, prototypical classification, pose estimation, orthogonal subspaces, zero-shot transfer, accessibility
TL;DR: Trained on the largest ASL dataset assembled to date (5,565 classes), PhonSSM disentangles phonological parameters via orthogonal subspaces and anatomical graph attention, achieving a skeleton-based state of the art of 72.08% on WLASL2000, a 15.7pp gain over prior methods.
Abstract: Sign language recognition suffers from catastrophic scaling failure: models that achieve high accuracy on small vocabularies collapse at realistic vocabulary sizes. Existing architectures treat signs as atomic visual patterns, learning flat representations that cannot exploit the compositional structure of sign languages, which are systematically built from discrete phonological parameters (handshape, location, movement, orientation) reused across the vocabulary. We introduce PhonSSM, which enforces phonological decomposition through anatomically grounded graph attention, explicit factorization into orthogonal subspaces, and prototypical classification that enables few-shot transfer. Using skeleton data alone on the largest ASL dataset assembled to date (5,565 signs), PhonSSM achieves 72.1% on WLASL2000 (+18.4pp over the skeleton state of the art), surpassing most RGB methods without video input. Gains are most dramatic in the few-shot regime (+225% relative), and the model transfers zero-shot to ASL Citizen, exceeding supervised RGB baselines. The vocabulary scaling bottleneck is fundamentally a representation learning problem, solvable through compositional inductive biases that mirror linguistic structure.
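The sketch below illustrates, in minimal PyTorch, the two ideas the abstract names: factorizing a pooled skeleton embedding into per-parameter subspaces kept approximately orthogonal, and classifying by similarity to class prototypes. It is not the authors' implementation; the module and parameter names (PhonologicalFactorizer, n_params, sub_dim) are hypothetical, and the anatomical graph attention and state space backbone are assumed to produce the input embedding and are not shown.

```python
# Illustrative sketch, not the PhonSSM code release.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PhonologicalFactorizer(nn.Module):
    """Project a clip-level embedding into handshape/location/movement/orientation
    subspaces and score classes by cosine similarity to learned prototypes."""

    def __init__(self, embed_dim=512, sub_dim=128, n_params=4, n_classes=2000):
        super().__init__()
        # One linear projection per phonological parameter.
        self.projections = nn.ModuleList(
            [nn.Linear(embed_dim, sub_dim) for _ in range(n_params)]
        )
        # Learnable class prototypes in the concatenated subspace.
        self.prototypes = nn.Parameter(torch.randn(n_classes, sub_dim * n_params))

    def forward(self, x):
        # x: (batch, embed_dim) pooled output of a skeleton encoder.
        factors = [proj(x) for proj in self.projections]   # n_params tensors of (B, sub_dim)
        z = torch.cat(factors, dim=-1)                      # (B, sub_dim * n_params)
        logits = F.normalize(z, dim=-1) @ F.normalize(self.prototypes, dim=-1).T
        return logits, factors

    def orthogonality_penalty(self, factors):
        # Penalize similarity between different subspace embeddings so the
        # phonological parameters stay disentangled.
        penalty = 0.0
        for i in range(len(factors)):
            for j in range(i + 1, len(factors)):
                penalty = penalty + F.cosine_similarity(
                    factors[i], factors[j], dim=-1
                ).abs().mean()
        return penalty
```

Under this reading, training would combine a cross-entropy loss on the (temperature-scaled) prototype logits with the orthogonality penalty, and few-shot or zero-shot transfer would amount to averaging the factorized embeddings of a new sign's support examples into an additional prototype row rather than retraining the classifier.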
Submission Number: 14