Simulating Inconsistencies in Task-oriented Dialog

Anonymous

16 Feb 2022 (modified: 05 May 2023) ACL ARR 2022 February Blind Submission
Abstract: Most existing dialog models are trained either on static dialog datasets or interactively with user simulators, and are evaluated in the same way. Such methods largely rest on the idealized assumption that the user behaves consistently with the goal. In practice, however, real users often behave inconsistently due to unpredictable changes of mind or language understanding errors. In this paper, we give a systematic investigation of the inconsistency problem in real-world dialog systems and introduce three kinds of inconsistencies, namely Goal Change, Action Disloyalty, and Understanding Deviation. We propose a user model that simulates these three kinds of inconsistencies and can be used to examine the robustness of dialog models. The simulation model is further utilized to support reinforcement learning and inconsistency-aware data augmentation, which boosts the performance of pipeline and end-to-end dialog models under inconsistent situations.
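To make the three inconsistency types concrete, the sketch below shows a minimal toy user simulator that randomly injects Goal Change, Action Disloyalty, and Understanding Deviation into its turn-level acts. This is not the authors' model: the goal representation, slot names, probabilities, and all class and function names are hypothetical, and the injection logic is only illustrative of the general idea.

```python
import random
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of a user simulator injecting the three inconsistency
# types named in the abstract. All names and probabilities are assumptions.

@dataclass
class UserGoal:
    constraints: Dict[str, str]            # e.g. {"food": "italian", "area": "north"}
    alternatives: Dict[str, List[str]]     # values the user might switch to mid-dialog

@dataclass
class InconsistentUserSimulator:
    goal: UserGoal
    p_goal_change: float = 0.1             # chance of silently revising a goal slot
    p_action_disloyalty: float = 0.1       # chance of acting against the stated goal
    p_understanding_deviation: float = 0.1 # chance of misreading the system's act
    log: List[str] = field(default_factory=list)

    def respond(self, system_act: Dict[str, str]) -> Dict[str, str]:
        """Produce a (possibly inconsistent) user act for one turn."""
        # Understanding Deviation: the user misinterprets part of the system act.
        if system_act and random.random() < self.p_understanding_deviation:
            slot = random.choice(list(system_act))
            system_act = {**system_act, slot: "<misunderstood>"}
            self.log.append(f"understanding_deviation on slot '{slot}'")

        # Goal Change: the user updates a constraint mid-dialog.
        if self.goal.alternatives and random.random() < self.p_goal_change:
            slot = random.choice(list(self.goal.alternatives))
            new_value = random.choice(self.goal.alternatives[slot])
            self.goal.constraints[slot] = new_value
            self.log.append(f"goal_change: {slot} -> {new_value}")

        # Default user act: inform one constraint from the (current) goal.
        slot, value = random.choice(list(self.goal.constraints.items()))
        user_act = {f"inform_{slot}": value}

        # Action Disloyalty: the user's utterance contradicts the goal.
        if random.random() < self.p_action_disloyalty:
            user_act = {f"inform_{slot}": "<off_goal_value>"}
            self.log.append(f"action_disloyalty on slot '{slot}'")

        return user_act


if __name__ == "__main__":
    goal = UserGoal(
        constraints={"food": "italian", "area": "north"},
        alternatives={"food": ["chinese", "thai"], "area": ["south"]},
    )
    sim = InconsistentUserSimulator(goal, 0.3, 0.3, 0.3)
    for _ in range(3):
        print(sim.respond({"request": "food"}))
    print(sim.log)
```

Such a simulator could, in principle, serve both as an adversarial evaluation partner for robustness checks and as a generator of inconsistent dialog data for augmentation, in the spirit of the uses described in the abstract.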
Paper Type: long