Improving Long-form Speech Translation through Segmentation with Large Language Models and Finite State Decoding Constraints

Anonymous

17 Feb 2023 (modified: 05 May 2023), ACL ARR 2023 February Blind Submission
Abstract: One challenge in spoken language translation is that much spoken content is long-form, while short units are required to obtain high-quality translations. To address this mismatch, we adapt large language models (LLMs) to split long ASR transcripts into segments that can be independently translated so as to maximize overall translation quality. To counter the LLMs' tendency to hallucinate, we incorporate finite-state constraints during decoding that eliminate invalid outputs. We find that, through prompt-tuning or fine-tuning, LLMs can adapt to transcripts containing ASR errors. Compared to a state-of-the-art automatic punctuation baseline, our best LLM improves average BLEU by 2.9 points across nine test sets for English-German, English-Spanish, and English-Arabic TED talk translation, purely by improving segmentation.
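To make the finite-state constraint concrete, here is a minimal sketch, not the paper's implementation. It assumes the LLM is prompted to reproduce the input transcript verbatim while optionally inserting a boundary symbol between tokens; the automaton then restricts each decoding step to either the next transcript token or the boundary symbol, so hallucinated content is unreachable. The scorer `score` stands in for a hypothetical LLM log-probability function and is not part of the paper.

```python
# Hedged sketch of finite-state constrained decoding for segmentation.
# Assumptions (not from the abstract): the model copies the transcript and
# may insert a <brk> boundary symbol; `score` is a stand-in LLM scorer.

from typing import Callable, List

BRK = "<brk>"

def constrained_segment(
    transcript: List[str],
    score: Callable[[List[str], str], float],  # log-prob of token given prefix
) -> List[str]:
    """Greedy decode under a finite-state constraint.

    State = index of the next transcript token to copy. In each state the
    only valid emissions are (a) that transcript token, which advances the
    state, or (b) BRK, which marks a segment boundary and keeps the state.
    Invalid outputs are never considered, eliminating hallucination.
    """
    output: List[str] = []
    i = 0                 # automaton state: next input token to copy
    prev_was_brk = True   # forbid a leading or doubled boundary
    while i < len(transcript):
        candidates = [transcript[i]]
        if not prev_was_brk:
            candidates.append(BRK)
        best = max(candidates, key=lambda tok: score(output, tok))
        output.append(best)
        if best == BRK:
            prev_was_brk = True
        else:
            prev_was_brk = False
            i += 1
    return output

# Toy usage with a stand-in scorer that prefers a boundary after "you".
def toy_score(prefix: List[str], token: str) -> float:
    if token == BRK:
        return 1.0 if prefix and prefix[-1] == "you" else -1.0
    return 0.0

if __name__ == "__main__":
    asr = "thank you so today i will talk about segmentation".split()
    print(" ".join(constrained_segment(asr, toy_score)))
    # -> thank you <brk> so today i will talk about segmentation
```

The key design point the sketch illustrates is that the constraint is enforced by masking the candidate set at every step rather than by filtering complete outputs, so even a model prone to hallucination can only choose where to place boundaries, never what text to emit.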
Paper Type: long
Research Area: Syntax: Tagging, Chunking and Parsing / ML