Abstract: Sign Language Production (SLP) has achieved promising progress in offline settings, where the full input text is available before generation. However, such methods are unsuitable for real-time applications that require low latency. In this work, we introduce Simultaneous Sign Language Production (SimulSLP), a new task that generates sign pose sequences incrementally from streaming text input. We first formalize the SimulSLP task and adapt the Average Token Delay metric to quantify latency. We then benchmark the task under a wait-$k$ policy using three strong baselines from offline SLP: an end-to-end system and two cascaded pipelines with neural and dictionary-based Gloss-to-Pose modules. All baselines, however, suffer from a mismatch between full-sequence training and partial-input inference. To mitigate this, we propose a Future-Context-Aware Inference (FCAI) strategy. FCAI enhances partial input representations by predicting a small number of future tokens with a large language model. Before decoding, the speculative features from the predicted tokens are discarded to keep the output aligned with the observed input. Experiments on PHOENIX-2014T show that FCAI significantly improves the quality-latency trade-off, especially in low-latency settings, offering a promising step toward SimulSLP.
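To make the wait-$k$ reading policy and the FCAI step concrete, here is a minimal sketch of one possible inference loop, assuming the structure described in the abstract. All function names (`predict_future_tokens`, `encode`, `decode_pose_chunk`) are hypothetical placeholders, not the paper's actual API, and details such as chunking and feature shapes are assumptions.

```python
# Minimal sketch: wait-k simultaneous decoding with a
# Future-Context-Aware Inference (FCAI) step. Hypothetical API,
# not the paper's implementation.
from typing import Callable, List

def simulslp_wait_k(
    stream: List[str],                 # incoming source tokens, one at a time
    k: int,                            # wait-k lag: read k tokens before writing
    m: int,                            # number of speculative future tokens (FCAI)
    predict_future_tokens: Callable,   # e.g. LLM continuation: (prefix, m) -> m tokens
    encode: Callable,                  # text encoder: tokens -> per-token features
    decode_pose_chunk: Callable,       # pose decoder: features -> next pose frames
) -> List:
    poses: List = []
    prefix: List[str] = []
    for t, token in enumerate(stream):
        prefix.append(token)           # READ one source token
        if t + 1 < k:                  # still inside the initial wait-k lag
            continue
        # FCAI: extend the observed prefix with m predicted future tokens
        # so the encoder sees an (approximate) fuller context.
        future = predict_future_tokens(prefix, m)
        features = encode(prefix + future)
        # Discard the speculative positions before decoding, keeping the
        # generated poses aligned with what was actually observed.
        features = features[: len(prefix)]
        poses.extend(decode_pose_chunk(features))   # WRITE pose frames
    return poses
```

Under this reading of the abstract, the predicted tokens influence only the contextualization of the observed prefix; their own features never reach the pose decoder, which is what keeps inference consistent with the partial input.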
External IDs: doi:10.1109/lsp.2025.3610359