GraphDiffs: Graph Modeling with Differential Sequence for Document-Grounded Conversation

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Knowledge-grounded dialogue systems must incorporate natural transitions between pieces of knowledge for dialogue to flow smoothly. Current systems lack both well-structured representations of knowledge that spans multiple documents and effective algorithms that exploit such resources. We design a Co-Referential Multi-Document Graph (CoRM-DoG) that seamlessly captures inter-document correlations and intra-document co-referential knowledge relations. To best linearise this static graph into sequential dialogues, we contribute a Graph Modeling with Differential Sequence (GraphDiffs) method for knowledge transitions in dialogue. GraphDiffs performs knowledge selection by natively accounting for contextual graph structure and introduces differential sequence learning to effectively model multi-turn knowledge transitions. Our analysis shows that GraphDiffs based on CoRM-DoG significantly outperforms the current state of the art by 9.5% and 7.4% on two public benchmarks, WoW and Holl-E, where the modeling of co-reference and differential sequences is critical to its success.
Paper Type: long