Teaching BERT to Wait: Balancing Accuracy and Latency for Streaming Disfluency Detection

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=lln-R37yXqQ
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: In modern interactive speech-based systems, speech is consumed and transcribed incrementally prior to having disfluencies removed. While this post-processing step is crucial for producing clean transcripts and high performance on downstream tasks (e.g. machine translation), most current state-of-the-art NLP models such as the Transformer operate non-incrementally, potentially causing unacceptable delays for the user. In this work we propose a streaming BERT-based sequence tagging model that, combined with a novel training objective, is capable of detecting disfluencies in real-time while balancing accuracy and latency. This is accomplished by training the model to decide whether to immediately output a prediction for the current input or to wait for further context, in essence learning to dynamically size the lookahead window. Our results demonstrate that our model produces comparably accurate predictions and does so sooner than our baselines, with lower flicker. Furthermore, the model attains state-of-the-art latency and stability scores when compared with recent work on incremental disfluency detection.
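The abstract describes a tagger that, for each incoming token, either commits to a prediction or waits for more right context. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a hypothetical three-way head over BERT hidden states (FLUENT, DISFLUENT, WAIT), re-encodes the growing prefix after each new word, and commits a label only once the model stops predicting WAIT, which is one way to realize a dynamically sized lookahead window.

```python
# Minimal sketch (assumptions: label set {0=FLUENT, 1=DISFLUENT, 2=WAIT},
# an untrained linear head, and prefix re-encoding at every step).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(encoder.config.hidden_size, 3)  # hypothetical 3-way head

def stream_tags(words):
    """Emit a tag for each word as soon as its prediction is not WAIT;
    pending words are retried at later steps with a longer lookahead."""
    emitted = {}  # word index -> committed label
    for t in range(1, len(words) + 1):
        enc = tokenizer(words[:t], is_split_into_words=True, return_tensors="pt")
        with torch.no_grad():
            hidden = encoder(**enc).last_hidden_state   # (1, seq_len, dim)
            preds = head(hidden).argmax(-1)[0]           # (seq_len,)
        for pos, w_idx in enumerate(enc.word_ids(0)):
            if w_idx is None or w_idx in emitted:
                continue
            label = preds[pos].item()
            if label != 2:                               # 2 = WAIT
                emitted[w_idx] = "DISFLUENT" if label == 1 else "FLUENT"
    return [emitted.get(i, "FLUENT") for i in range(len(words))]

print(stream_tags("i want a flight to boston uh to denver".split()))
```

In this illustrative setup, latency corresponds to how many extra tokens arrive before a word's label is committed, and stability (flicker) is preserved by never revising a committed label; the paper's contribution is the training objective that teaches the model when predicting WAIT is worth the added delay.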
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC-4
Copyright Consent Signature (type Name Or NA If Not Transferrable): Angelica Chen
Copyright Consent Name And Address: NYU Center for Data Science, 60 5th Ave, New York, NY 10011