Inducing Global and Local Knowledge Attention in Multi-turn Dialog Understanding

Anonymous

04 Mar 2022 (modified: 05 May 2023) · Submitted to NLP for ConvAI
Keywords: Knowledge, context, dialog, attention, BERT
TL;DR: Context and knowledge attention in spoken language understanding
Abstract: In multi-turn dialog understanding, semantic frames are constructed by detecting intents and slots within each user utterance. However, recent works lack the capability to model multi-turn dynamics within a dialog: contexts are mostly used to update dialog states rather than to capture the overall flow of intent semantics in spoken language understanding (SLU). Moreover, humans rely on commonsense knowledge to interpret slot semantics revealed through word connotations, yet many works have considered such knowledge only for end-to-end response generation. In this paper, we bridge this research gap by equipping a BERT-based SLU framework with knowledge and context attention modules. We propose three attention mechanisms to induce both global and local attention on knowledge triples. Experimental results on two complex multi-turn dialog datasets demonstrate significant improvements from our proposed framework, which jointly models the two SLU tasks with commonsense knowledge and dialog contexts. Attention visualization also provides clear interpretability of how our modules leverage knowledge across the utterance.
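
To make the described knowledge attention concrete, below is a minimal sketch of attending from BERT token representations over embedded knowledge triples; it is not the paper's exact formulation, and all names (KnowledgeAttention, hidden_dim, the fusion-by-addition step) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch: dot-product attention from utterance tokens (local queries)
# over knowledge triple embeddings, with the attended knowledge fused back into
# the token states. Module and variable names are hypothetical.
import torch
import torch.nn as nn


class KnowledgeAttention(nn.Module):
    """Attention from utterance tokens to knowledge triple embeddings."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)
        self.key_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, token_states: torch.Tensor, triple_embeds: torch.Tensor) -> torch.Tensor:
        # token_states:  (batch, seq_len, hidden)     e.g. BERT outputs
        # triple_embeds: (batch, num_triples, hidden) embedded (head, relation, tail) triples
        q = self.query_proj(token_states)            # one query per token (local attention)
        k = self.key_proj(triple_embeds)
        scores = torch.matmul(q, k.transpose(1, 2)) / q.size(-1) ** 0.5
        weights = torch.softmax(scores, dim=-1)      # attention weights over triples
        context = torch.matmul(weights, triple_embeds)
        # Fuse the attended knowledge back into the token representations.
        return token_states + context


# Usage: the fused states could then feed slot-tagging and intent-classification heads.
attn = KnowledgeAttention(hidden_dim=768)
tokens = torch.randn(2, 16, 768)   # placeholder BERT token outputs
triples = torch.randn(2, 8, 768)   # placeholder triple embeddings
fused = attn(tokens, triples)      # shape: (2, 16, 768)
```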