ZeST: A Zero-Resourced Speech-to-Speech Translation Approach for Unknown, Unpaired, and Untranscribed Languages
Abstract: Speech-to-speech translation (S2ST) has emerged as a practical way to overcome linguistic barriers, enabling direct translation between spoken languages without intermediate text representations. However, existing S2ST systems face significant challenges, including the need for extensive parallel speech data and a reliance on languages with known writing systems. This paper proposes ZeST, a novel zero-resourced approach to speech-to-speech translation that addresses the challenges of processing unknown, unpaired, and untranscribed languages. ZeST consists of two main phases: (1) discovering semantically related speech pairs from unpaired data by leveraging self-supervised visually grounded speech (VGS) models, and (2) achieving textless speech-to-speech translation for untranscribed languages using discrete speech representations and sequence-to-sequence modeling. Experimental evaluations across three data scenarios demonstrate that ZeST performs direct speech-to-speech translation without relying on transcribed data or parallel corpora. The results highlight the potential of ZeST to advance zero-resourced speech processing and improve communication in multilingual societies.
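The abstract gives no implementation detail, so the sketch below is only a rough illustration of the two phases it names, under stated assumptions: phase 1 is shown as mutual-nearest-neighbour mining over utterance embeddings in a shared space (such as one produced by a visually grounded speech model), and phase 2 as k-means discretization of frame-level features into unit sequences. The function names, the mutual-nearest-neighbour criterion, the similarity threshold, and the unit count are illustrative assumptions, not details taken from the paper.

```python
import itertools

import numpy as np
from sklearn.cluster import KMeans


def mine_pseudo_pairs(emb_a, emb_b, threshold=0.6):
    """Phase 1 (sketch): find semantically related utterance pairs across
    two unpaired speech collections via mutual nearest neighbours in a
    shared embedding space. `threshold` is an illustrative cutoff.

    emb_a: (N, d) utterance embeddings for language A
    emb_b: (M, d) utterance embeddings for language B
    returns: list of (i, j, cosine_similarity) pseudo-parallel pairs
    """
    # L2-normalise so dot products become cosine similarities.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T                        # (N, M) similarity matrix

    best_b_for_a = sim.argmax(axis=1)    # nearest B utterance for each A
    best_a_for_b = sim.argmax(axis=0)    # nearest A utterance for each B

    pairs = []
    for i, j in enumerate(best_b_for_a):
        # Keep only mutual nearest neighbours above the cutoff.
        if best_a_for_b[j] == i and sim[i, j] >= threshold:
            pairs.append((i, int(j), float(sim[i, j])))
    return pairs


def fit_unit_codebook(corpus_frame_features, n_units=100):
    """Phase 2 (sketch), step 1: learn a discrete unit inventory by
    clustering pooled frame-level features from the untranscribed
    corpus. The unit count is an illustrative choice."""
    return KMeans(n_clusters=n_units, n_init=10).fit(corpus_frame_features)


def speech_to_units(frame_features, codebook):
    """Phase 2 (sketch), step 2: map one utterance's frame features
    (T, d) to a deduplicated sequence of unit ids, i.e. the textless
    representation a sequence-to-sequence model can translate."""
    ids = codebook.predict(frame_features)
    # Collapse consecutive repeats (e.g. 7 7 7 3 3 -> 7 3), the usual
    # preprocessing before sequence-to-sequence training.
    return [int(k) for k, _ in itertools.groupby(ids)]
```

With pairs mined by the first function and both sides mapped to unit sequences by the other two, a standard encoder-decoder (e.g. a Transformer) can be trained to translate source units into target units; in the textless S2ST literature, the predicted target units are then typically resynthesized to waveform with a unit-based vocoder.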