Inducing Global and Local Knowledge Attention in Multi-turn Dialog Understanding

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: In multi-turn dialog understanding, semantic frames are constructed by detecting intents and slots within each user utterance. However, recent work rarely models multi-turn dynamics within a dialog: contexts are mostly used to update dialog states rather than to capture the overall flow of intent semantics in spoken language understanding (SLU). Moreover, external knowledge related to a dialog can help uncover deeper semantic information across turns, yet most prior work has exploited such knowledge only for end-to-end response generation. In this paper, we equip a BERT-based joint framework with a context attention module and a knowledge attention module so that knowledge attention, together with dialog contexts, is shared between the two SLU tasks. We design three attention mechanisms that induce both global and local attention over knowledge triples. Experimental results on two challenging multi-turn dialog datasets demonstrate that our framework achieves significant improvements by jointly modeling the two SLU tasks with filtered knowledge and dialog contexts. Attention visualizations further offer interpretable evidence of how our modules leverage knowledge across an utterance.
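To make the knowledge attention idea concrete, below is a minimal sketch (not the authors' implementation) of attending over knowledge-triple embeddings with an utterance representation as the query. It assumes pre-computed embeddings; the names `utterance_vec`, `triple_embs`, and the concatenation-based fusion are illustrative choices, not details from the paper.

```python
import numpy as np

def knowledge_attention(utterance_vec: np.ndarray, triple_embs: np.ndarray) -> np.ndarray:
    """Fuse retrieved knowledge triples into an utterance representation.

    utterance_vec: shape (d,)   e.g. a BERT [CLS] vector for the current turn
    triple_embs:   shape (k, d) one embedding per retrieved knowledge triple
    """
    d = utterance_vec.shape[-1]
    # Scaled dot-product scores: utterance as query, triples as keys.
    scores = triple_embs @ utterance_vec / np.sqrt(d)   # (k,)
    # Softmax over the k triples (numerically stabilized).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted summary of the triples (values = keys here).
    knowledge_ctx = weights @ triple_embs                # (d,)
    # Simple fusion: concatenate utterance and knowledge context;
    # downstream intent and slot heads could consume this jointly.
    return np.concatenate([utterance_vec, knowledge_ctx])
```

In the same spirit, a "global" variant could compute one attention distribution per dialog, while a "local" variant could re-weight triples per utterance or per token; the sketch above corresponds to the per-utterance (local) case.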