TL;DR: Speech-based Slot Filling using Large Language Models
Abstract: Recent advances in large language models (LLMs) have demonstrated unprecedented capabilities across a range of language tasks. This paper investigates the application of LLMs to slot filling with noisy ASR transcriptions, via both in-context learning and task-specific fine-tuning. Dedicated prompt designs and fine-tuning approaches are proposed to improve the robustness of LLMs to ASR errors in slot filling. Moreover, a linearised knowledge injection (LKI) scheme is proposed to integrate dynamic external knowledge into LLMs. Experiments were performed on the SLURP dataset to quantify the performance of LLMs, including GPT-3.5-turbo, GPT-4, LLaMA-13B, LLaMA-2-13B and Vicuna-13B (v1.1 and v1.5), under different ASR error rates. The proposed fine-tuning approach combined with the LKI scheme for Vicuna-13B-v1.5 achieved 8.1% and 21.5% absolute SLU-F1 improvements over the strong Flan-T5-base baseline on the limited-data and zero-shot setups, respectively.
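For concreteness, below is a minimal sketch of the kind of in-context learning setup the abstract describes: prompting an LLM to extract slots from a noisy ASR transcription. It assumes the OpenAI Python client; the prompt wording, the example pair, and the slot names are illustrative assumptions, not the paper's actual prompt design.

```python
# Minimal in-context slot-filling sketch for noisy ASR transcriptions.
# The prompt template, in-context example, and slot ontology below are
# illustrative assumptions, not the prompt design proposed in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = """You are a slot-filling system for a voice assistant.
The input is an ASR transcription and may contain recognition errors.
Extract the slots as JSON using only these keys: {slots}.

Example:
Transcription: "set an alarm for seven thirty a m"
Slots: {{"time": "seven thirty a m"}}

Transcription: "{utterance}"
Slots:"""

def fill_slots(utterance: str, slots: list[str]) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic decoding for structured output
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(
                       slots=", ".join(slots), utterance=utterance)}],
    )
    return resp.choices[0].message.content

# A SLURP-style utterance with ASR errors ("wether", "new yolk").
print(fill_slots("what's the wether like in new yolk tomorrow",
                 ["weather_descriptor", "place_name", "date"]))
```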
Paper Type: long
Research Area: Speech recognition, text-to-speech and spoken language understanding
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English